docs: stringclasses (4 values)
category: stringlengths (3 to 31)
thread: stringlengths (7 to 255)
href: stringlengths (42 to 278)
question: stringlengths (0 to 30.3k)
context: stringlengths (0 to 24.9k)
marked: int64 (0 to 1)
huggingface
Course
Build a Twitter topic extractor
https://discuss.huggingface.co/t/build-a-twitter-topic-extractor/11571
Please read the topic category description to understand what this is all about. Description: Twitter classifies trending tweets according to a predefined set of topics like “Data science”, “Hip hop”, “Sport”, etc. However, their algorithm often appears to get confused by certain keywords in the tweet or the content of the image (see here for some funny examples). The goal of this project is to explore whether it’s possible to create a better topic extractor, or at least one that is more targeted at a smaller set of domains. There are several ways to approach the task: frame it as a multiclass classification problem, or frame it as an unsupervised clustering problem and combine it with techniques like UMAP and/or HDBSCAN. Model(s): Since tweets are short, picking one of the sentence-transformers models on the Hub is likely a good place to start. Datasets: There are various Twitter datasets on the Hub; here are a few examples to start with: tweet_eval, sentiment140, emotion. Challenges: If you take the unsupervised learning approach, be warned that there’s no “correct” answer and you will have to experiment with the dimensionality reduction / clustering algorithms to get meaningful clusters. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that can either automatically classify a tweet according to a topic, or visualise the 2D projection of the embeddings and colour them by topic. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://towardsdatascience.com/how-exactly-umap-works-13e3040e1668 and https://huggingface.co/spaces/edugp/embedding-lenses (useful for UMAP inspiration). Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #twitter-topic-extractor channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
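A minimal sketch of the unsupervised route, assuming the sentence-transformers, umap-learn and hdbscan packages are installed; the model name, toy tweets and hyperparameters below are illustrative placeholders rather than project requirements.

# Sketch: embed tweets, reduce dimensionality, then cluster (assumes pip install
# sentence-transformers umap-learn hdbscan). Model name and settings are placeholders.
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

tweets = [
    "New hip hop album dropped today",
    "This mixtape is pure fire",
    "Gradient boosting beats my neural net again",
    "Cleaning a messy dataset before modelling",
    "What a goal in the last minute!",
    "Our team won the championship",
]

# 1. Embed the tweets with a sentence-transformers model
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(tweets)

# 2. Reduce dimensionality with UMAP (also gives you the 2D projection to plot)
reduced = umap.UMAP(n_components=2, n_neighbors=3, random_state=42).fit_transform(embeddings)

# 3. Cluster the reduced embeddings with HDBSCAN; label -1 means "noise"
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(reduced)
print(labels)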
I’d like to work on this project!
0
huggingface
Course
Project: Create a new zero-shot model with NLI data
https://discuss.huggingface.co/t/project-create-a-new-zero-shot-model-with-nli-data/11858
Description: The zero-shot classification pipeline has become very popular on Hugging Face. It allows you to classify a text into any category without having to fine-tune a model for the specific classification task you are interested in. The zero-shot pipeline is based on models trained on Natural Language Inference (NLI). This project will train a new NLI model, which can then be used in the zero-shot classification pipeline. Model(s): Any base model can be used. Since there are already several NLI models on the model hub, I suggest training a new model based on Microsoft’s DeBERTa-v3 model. Version three was only published a few weeks ago and can outperform larger models (see an example here). We can probably create a new SOTA NLI model with the new DeBERTa version and enough NLI data. Datasets: Established NLI datasets include MultiNLI, SNLI and ANLI. Other interesting NLI datasets include FEVER-NLI, DocNLI and LingNLI. More datasets can be included! Challenges: NLI models can be trained as either 3-class classifiers (entailment/neutral/contradiction) or as 2-class classifiers (entailment/not_entailment); both setups have different advantages and disadvantages. There is a lot of NLI data (2+ million texts in the datasets linked above), which makes training computationally expensive, so optimising the training pipeline is a challenge. Many different datasets can be translated into NLI format; including more datasets can be beneficial, but requires manual transformation of the datasets. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that provides an interface for zero-shot classification with a new NLI model in the backend. Additional resources: See the links to the datasets above. Also see Joe Davison’s original blog post on the zero-shot pipeline. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #zero-shot channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
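For reference, a minimal sketch of how a fine-tuned NLI checkpoint plugs into the zero-shot pipeline; facebook/bart-large-mnli is just an existing NLI model used as a stand-in for the DeBERTa-v3-based model this project would produce.

# Sketch: any NLI checkpoint can back the zero-shot classification pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The new DeBERTa-v3 checkpoints outperform much larger models on several benchmarks.",
    candidate_labels=["machine learning", "sports", "politics"],
)
# The pipeline returns labels sorted by score
print(result["labels"][0], result["scores"][0])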
Hyee! I’d love to contribute in this one. I guess further discussion will take place in Discord?
0
huggingface
Course
Create your own writing assistant
https://discuss.huggingface.co/t/create-your-own-writing-assistant/11568
Please read the topic category description to understand what this is all about. Description: Many email and word processing applications can now automatically detect and correct common grammatical errors as you write. For example, the sentence “I am doing fine. How is you?” might be corrected to “I am doing fine. How are you?”. The goal of this project is to fine-tune a Transformer model to automatically provide these corrections, similar to how Grammarly does. Model(s): This task can be viewed as a sequence-to-sequence task, so models like T5 would be a great starting point. Datasets: jfleg. Challenges: If you use T5, you’ll need to define a suitable prefix for the text-to-text task. You’ll also need to think about suitable metrics for the evaluation. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that can automatically provide suggestions to improve the grammar of some input text. Check out Grammarly for some inspiration on the visualization side. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://towardsdatascience.com/fine-tune-a-transformer-model-for-grammar-correction-b5c8ca49cc26 and https://github.com/PrithivirajDamodaran/Gramformer. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #writing-assistant channel. Just make sure you comment here to indicate that you’ll be contributing to this project. Team organization on the Hub: To join this team, make sure you join the following organisation on the Hub: team-writing-assistant (🤗 Course Team Writing Assistant).
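A sketch of the inference pattern you would end up with after fine-tuning: the "grammar: " prefix is an arbitrary choice and t5-small is only a stand-in for your fine-tuned checkpoint (an off-the-shelf T5 has not learned this prefix, so its output will not be a real correction until you train it on JFLEG-style pairs).

# Sketch of seq2seq inference with a task prefix; checkpoint and prefix are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# The same "grammar: " prefix would be prepended to every training example as well
inputs = tokenizer("grammar: I am doing fine. How is you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))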
Hi, I am Ashish and I am interested in this project and would like to build it.
0
huggingface
Course
Create an AI assistant for lawyers
https://discuss.huggingface.co/t/create-an-ai-assistant-for-lawyers/11512
Please read the topic category description to understand what this is all about. Description: The Contract Understanding Atticus Dataset (CUAD) is a new dataset for legal contract review. Legal contracts often contain a small number of important clauses that warrant review by lawyers. This is a time-intensive task that requires specialised knowledge, so the goal of this project is to see if Transformer models can be used to extract answers to a predefined set of legal questions. Model(s): Many of the Question Answering models on the Hub could serve as a good baseline to get started. Given the specialised domain, you will probably want to try: fine-tuning encoder-based models like BERT, RoBERTa, DeBERTa and friends; performing domain adaptation, by first fine-tuning the language model before tuning the question-answering head. Datasets: CUAD is available on the Hub. Challenges: This is a highly specialised domain, so a vanilla Transformer may not obtain great results. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that allows someone to select a legal contract and one or more questions, and provides the answers. Additional resources: CUAD announcement, CUAD paper, CUAD codebase. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #ai-law-assistant channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
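A minimal baseline sketch using the extractive question-answering pipeline; deepset/roberta-base-squad2 is a generic SQuAD-style checkpoint rather than a CUAD-specific one, and the contract snippet is made up.

# Sketch: extractive QA over a contract clause as a starting baseline.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
contract = "This Agreement shall be governed by the laws of the State of Delaware."
result = qa(question="Which state's law governs this agreement?", context=contract)
print(result["answer"], result["score"])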
Hi @lewtun, my name is Pavle and I would be interested in working on this project.
0
huggingface
Course
Create a SentenceTransformer in Dhivehi using ELECTRA
https://discuss.huggingface.co/t/create-a-sentencetransformer-in-dhivehi-using-electra/11938
Description: Dhivehi is a low-resource language. Since little data is available, ELECTRA seems to be a good option, as it requires less computing power and training data compared to other architectures. Model: an electra-small model pretrained on Dhivehi is available here. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #sentence-transformers-dhivehi channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
Hey @ashraq, thanks for proposing this interesting project! One question: what do you mean by creating a “sentence transformer”? Are you talking about adding a pooling layer to the electra-small model and then training that on a Dhivehi corpus? Do you also happen to have access to / know of a Dhivehi corpus to train on?
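For what it’s worth, a minimal sketch of what “adding a pooling layer” could look like with sentence-transformers; "path/to/dhivehi-electra-small" is a placeholder for the pretrained checkpoint linked in the project description, and the model would still need training on Dhivehi sentence pairs.

# Sketch: wrap a pretrained encoder with a mean-pooling layer to get sentence embeddings.
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer("path/to/dhivehi-electra-small", max_seq_length=128)
pooling = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,  # mean-pool token embeddings into one sentence vector
)
model = SentenceTransformer(modules=[word_embedding_model, pooling])

embeddings = model.encode(["Example sentence one.", "Example sentence two."])
print(embeddings.shape)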
0
huggingface
Course
Image captioning for low resource Indian Languages
https://discuss.huggingface.co/t/image-captioning-for-low-resource-indian-languages/11764
Many image captioning systems exist for English; in this project we will develop an image captioning system for an Indian language. If we have time and resources, we can extend this to other languages as well. Datasets: a dataset can be created by translating the captions of the existing Flickr30k dataset or any other image captioning dataset. An example: https://www.amitavadas.com/Image2Tweet.html. Other resources: Vision encoder-decoder model: Vision Encoder Decoder Models — transformers 4.12.2 documentation. Baseline: huggingface.co flax-community/image-captioning at main. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #image-captioning channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
Hey Sean - Looks really interesting. I am interested. Joined the Discord Channel.
0
huggingface
Course
Create a pop music Transformer
https://discuss.huggingface.co/t/create-a-pop-music-transformer/11526
Please read the topic category description to understand what this is all about. Description: If you treat music notation as a form of “text”, you can use language modelling to generate new songs! The goal of this project is to explore how well Transformers perform at music modelling. Model(s): None that we could find on the Hub, but see here for some pretrained music Transformers. Datasets: None that we could find on the Hugging Face Hub, but see here for some ideas. Challenges: This task probably involves pretraining a Transformer, which can potentially take multiple days using the GPU resources provided by AWS. An interesting alternative would be to see whether one can integrate an existing pretrained model within Transformers and use that as a starting point for fine-tuning. Desired project outcomes: Create a Streamlit or Gradio app on Spaces where people can remix famous songs with newly generated ones. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://towardsdatascience.com/creating-a-pop-music-generator-with-the-transformer-5867511b382a, https://arxiv.org/abs/2002.00212, https://openai.com/blog/jukebox/, https://research.google/teams/brain/magenta/. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #pop-music-transformer channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
This one seems pretty difficult, but too interesting to pass up. I’ll try working on it.
0
huggingface
Course
Use OpenAI’s CLIP for image search
https://discuss.huggingface.co/t/use-openais-clip-for-image-search/11577
Please read the topic category description to understand what this is all about. Description: One of the most exciting developments in 2021 was the release of OpenAI’s CLIP model, which was trained on a variety of (text, image) pairs. One of the cool things you can do with this model is use it for text-to-image and image-to-image search (similar to what is possible when you search for images on your phone). The goal of this project is to experiment with CLIP and learn about multimodal models. Several ideas can be explored, including: create a text-to-image search engine that allows users to search for images based on natural language queries (although CLIP was only trained on English text, you can use techniques like Multilingual Knowledge Distillation to extend the embeddings to new languages); create an image-to-image search engine that returns similar images, given a “query” image. Model(s): The CLIP models can be found on the Hub. Datasets: A common dataset that’s used for image demos is the Unsplash Dataset. You can get access to it here. Challenges: This project goes beyond the concepts introduced in Part II of the Course, so some familiarity with computer vision would be useful. Having said that, the Transformers API is similar for image tasks, so if you know how the pipeline() function works, then you’ll have no trouble adapting to this new domain. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that allows a user to find images that resemble a natural language query or input image. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://www.sbert.net/examples/applications/image-search/README.html. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #image-search channel (currently full!) or the #image-search-group2 channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
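A minimal sketch of scoring a text query against candidate images with CLIP; the blank PIL images are placeholders just to keep the example self-contained, and in the real app they would come from the Unsplash dataset.

# Sketch: text-to-image similarity with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder images; replace with real photos from your image collection
images = [Image.new("RGB", (224, 224), color=c) for c in ["red", "blue"]]
inputs = processor(text=["a photo of a cat"], images=images, return_tensors="pt", padding=True)

outputs = model(**inputs)
# logits_per_text has shape (num_texts, num_images); higher means more similar
print(outputs.logits_per_text.softmax(dim=-1))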
Hi @lewtun ! I’d love to work on this project.
0
huggingface
Course
[Nov 15th Event] It Ain’t Broke So D̶o̶n̶’t̶ F̶i̶x̶ Let’s Break It
https://discuss.huggingface.co/t/nov-15th-event-it-aint-broke-so-d-o-n-t-f-i-x-lets-break-it/11754
Use this topic to ask your questions to Jakob Uszkoreit during his talk: It Ain’t Broke So D̶o̶n̶’t̶ F̶i̶x̶ Let’s Break It. You can watch it on YouTube 4 or on Twitch at 11:45am PST.
What does it mean to “marginalize out” dependencies?
0
huggingface
Course
Create a detector of toxicity from political tweets in Spain
https://discuss.huggingface.co/t/create-a-detector-of-toxicity-from-political-tweets-in-spain/11910
Please read the topic category description to understand what this is all about. Description: The goal of this project is to automatically identify toxic speech emitted by politicians on Twitter. It focuses on Spain, which is an interesting multilingual case with several co-official languages that are used interchangeably in politics. Model(s): Multilingual models like xlm-roberta-base. Datasets: tweet_eval is a related resource, but it is English-only. Challenges: Getting high-quality data in Spanish and/or integrating data in other languages. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that is able to detect toxicity in tweets. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #toxic-tweets-es channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
I’d love to work on this project!
0
huggingface
Course
Create an ADR (Adverse drug reaction) extraction model from unstructured text
https://discuss.huggingface.co/t/create-an-adr-adverse-drug-reaction-extraction-model-from-unstructured-text/11859
Create an ADR (adverse drug reaction) extraction model for unstructured text. An adverse drug reaction (ADR) can be defined as an appreciably harmful or unpleasant reaction resulting from an intervention related to the use of a medicinal product. The goal of this project is to extract the ADR terms from unstructured text, such as social media posts or EHR documents. For example, in “Rash in hands caused by omeprazole.” the ADR is “Rash”, which is caused by omeprazole. Model(s): BioBERT-style models are a good starting point for the ADR extraction task: https://huggingface.co/dmis-lab/biobert-base-cased-v1.2, https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT, https://huggingface.co/gsarti/biobert-nli. Datasets: ade_corpus_v2 is usually a good corpus to test an ADR extraction model, and you can find other open-source ADR corpora as well. Challenges: You are given a social media post or raw text from an EHR document and you need to extract the ADR mentions as accurately as possible. Desired project outcomes: Create a Streamlit or Gradio app on Hugging Face Spaces that can extract ADR mentions from raw text. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://huggingface.co/datasets/ade_corpus_v2#additional-information, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6947008/, https://towardsdatascience.com/automated-adverse-drug-event-ade-detection-from-text-in-spark-nlp-with-biobert-837c700f5d8c. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #adr-extraction channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
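A sketch of the inference pattern once a token-classification head has been fine-tuned on ade_corpus_v2; the model name below is a hypothetical placeholder for your own checkpoint, not an existing Hub model.

# Sketch: ADR extraction as named-entity recognition via the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/biobert-finetuned-ade",  # hypothetical fine-tuned checkpoint
    aggregation_strategy="simple",                # merge word pieces into whole entity spans
)
print(ner("Rash in hands caused by omeprazole."))
# Expected output: a list of dicts with entity_group, score, word, start, end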
Hi, I am interested in this project
0
huggingface
Course
[Nov 16th Event] Philipp Schmid: Managed Training with Amazon SageMaker and Transformers
https://discuss.huggingface.co/t/nov-16th-event-philipp-schmid-managed-training-with-amazon-sagemaker-and-transformers/11887
Use this topic to ask your questions to Philipp Schmid during his talk: Managed Training with Amazon SageMaker and Transformers You can watch it on YouTube 2 or on Twitch 1 at 11:25am PST Notebook link 6
Hi, for the course project, can multiple team members launch training at the same time?
0
huggingface
Course
[Nov 16th Event] Mathieu Desvé: AWS ML Vision: Making Machine Learning Accessible to all Customers
https://discuss.huggingface.co/t/nov-16th-event-mathieu-desve-aws-ml-vision-making-machine-learning-accessible-to-all-customers/11886
Use this topic to ask your questions to Mathieu Desvé during his talk: AWS ML Vision: Making Machine Learning Accessible to all Customers You can watch it on YouTube 2 or on Twitch 1 at 11am PST
Is there any tensorboard-like tool to monitor model training on Sagemaker? This was answered at 1:23:33 in the live stream
0
huggingface
Course
[Nov 16th Event] Matthew Carrigan: New TensorFlow Features for Transformers and Datasets
https://discuss.huggingface.co/t/nov-16th-event-matthew-carrigan-new-tensorflow-features-for-transformers-and-datasets/11879
Use this topic to ask your questions to Matthew Carrigan during his talk: New TensorFlow Features for Transformers and Datasets You can watch it on YouTube 1 or on Twitch at 8:30am PST
Could the notebook shown in the video be linked?
0
huggingface
Course
[Nov 16th Event] Abubakar Abid: Building Machine Learning Applications Fast
https://discuss.huggingface.co/t/nov-16th-event-abubakar-abid-building-machine-learning-applications-fast/11884
Use this topic to ask your questions to Abubakar Abid during his talk: Building Machine Learning Applications Fast You can watch it on YouTube 4 or on Twitch at 10:35am PST
Will Gradio offer a friendly desktop interface in the future? Sometimes an ML model cannot be used publicly but is instead used internally inside a company.
0
huggingface
Course
[Nov 16th Event] Sylvain Gugger: Supercharge your PyTorch training loop with Accelerate
https://discuss.huggingface.co/t/nov-16th-event-sylvain-gugger-supercharge-your-pytorch-training-loop-with-accelerate/11881
Use this topic to ask your questions to Sylvain Gugger during his talk: Supercharge your PyTorch training loop with Accelerate You can watch it on YouTube 8 or on Twitch 1 at 9:50am PST
Can Accelerate help with hyperparameter optimization, the same way Ray Tune does?
0
huggingface
Course
[Nov 16th Event] Lucile Saulnier: Get your own tokenizer with Transformers & Tokenizers
https://discuss.huggingface.co/t/nov-16th-event-lucile-saulnier-get-your-own-tokenizer-with-transformers-tokenizers/11882
Use this topic to ask your questions to Lucile Saulnier during her talk: Get your own tokenizer with Transformers & Tokenizers You can watch it on YouTube 5 or on Twitch 1 at 9:25am PST
And here are the links to the notebook presented: colab link 1 and raw notebook
0
huggingface
Course
[Nov 16th Event] Lysandre Debut: The Hugging Face Hub as a means to collaborate on and share Machine Learning projects
https://discuss.huggingface.co/t/nov-16th-event-lysandre-debut-the-hugging-face-hub-as-a-means-to-collaborate-on-and-share-machine-learning-projects/11880
Use this topic to ask your questions to Lysandre Debut during his talk: The Hugging Face Hub as a means to collaborate on and share Machine Learning projects You can watch it on YouTube 2 or on Twitch at 9am PST
Can we use git tag in hub for model versioning ?
0
huggingface
Course
[Nov 16th Event] Lewis Tunstall: Simple Training with the Transformers Trainer
https://discuss.huggingface.co/t/nov-16th-event-lewis-tunstall-simple-training-with-the-transformers-trainer/11878
Use this topic to ask your questions to Lewis Tunstall during his talk: Simple Training with the Transformers Trainer You can watch it on YouTube 3 or on Twitch 1 at 8am PST Link to Notebook 9
Is it possible to share the link to the notebook Lewis is working on here?
0
huggingface
Course
Use EncoderDecoder models for text summarization
https://discuss.huggingface.co/t/use-encoderdecoder-models-for-text-summarization/11525
Please read the topic category description to understand what this is all about. Description: Most of the available Transformer models for text summarization only cover English documents. At the same time, there are now many pretrained BERT-like models in non-English languages. The goal of this project is to explore whether the EncoderDecoder architecture in Transformers (see Encoder Decoder Models — transformers 4.12.2 documentation) can be used to create summarization models using just the pretrained weights of encoder-based models. Your task is to pick a pretrained encoder in a non-English language and train it to summarise texts in that language. Model(s): See here for example models that people have fine-tuned using this architecture. Your task is to create your very own model with this technique! Datasets: Search for summarization datasets on the Hub to get an appropriate corpus for this task. Challenges: Text summarization is a tricky NLP task, so the performance obtained with these models may not match what is observed for their English counterparts (where much more data is available). Desired project outcomes: Create a Streamlit or Gradio app on Spaces that can summarize a document in your chosen language. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: Leveraging Pre-trained Checkpoints for Sequence Generation Tasks [PAPER], Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models [BLOG POST], and examples of these models on the Hub by @mrm8488: https://twitter.com/mrm8488/status/1458475725565141001?s=20. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and follow the instructions on the #join-course channel. Then join one of the following channels: #encoder-decoder-es (Spanish). Just make sure you comment here to indicate that you’ll be contributing to this project.
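A minimal sketch of warm-starting an encoder-decoder summarizer from an encoder-only checkpoint, following the warm-starting blog post linked above; bert-base-multilingual-cased is just an example encoder, and the resulting model still needs fine-tuning on (document, summary) pairs before it can summarize anything.

# Sketch: warm-start a seq2seq model from a pretrained encoder checkpoint.
from transformers import AutoTokenizer, EncoderDecoderModel

checkpoint = "bert-base-multilingual-cased"  # example; pick a strong encoder in your language
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# Generation needs to know which tokens start, pad and end the decoder sequence
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# From here, fine-tune on (document, summary) pairs (e.g. with Seq2SeqTrainer) before generating.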
Interesting project, I am interested in a Spanish summarizer using Encoder-Decoder model. Anyone else interested in this approach?
0
huggingface
Course
Use OpenAI’s CLIP for style transfer
https://discuss.huggingface.co/t/use-openais-clip-for-style-transfer/11833
Please read the topic category description to understand what this is all about. Description: One of the most exciting developments in 2021 was the release of OpenAI’s CLIP model, which was trained on a variety of (text, image) pairs. One of the cool things you can do with this model is use it to combine text and image embeddings to perform neural style transfer. In neural style transfer, the idea is to provide a prompt like “a starry night painting” and an image, and then get the model to produce a painting of the image in that style. The goal of this project is to learn whether CLIP can produce good paintings from text prompts. Model(s): The CLIP models can be found on the Hub. Datasets: For this project, you probably won’t need an actual dataset to perform neural style transfer. Just a single image should be enough to tune CLIP and an image encoder. Of course, you are free to experiment with larger datasets if you want! Challenges: This project goes beyond the concepts introduced in Part II of the Course, so some familiarity with computer vision would be useful. Having said that, the Transformers API is similar for image tasks, so if you know how the pipeline() function works, then you’ll have no trouble adapting to this new domain. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that allows a user to provide an image and a text prompt, and produces a painting of that image in the desired style. Additional resources: You can Google “neural style transfer” to find plenty of information about this technique. Here is one advanced example to give you an idea: GitHub - orpatashnik/StyleCLIP: Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral). Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #neural-style-transfer channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
Very interesting topic, I am in!
0
huggingface
Course
[Nov 15th Event] Thomas Wolf: Transfer Learning and the birth of the Transformers library
https://discuss.huggingface.co/t/nov-15th-event-thomas-wolf-transfer-learning-and-the-birth-of-the-transformers-library/11748
Use this topic to ask your questions to Thomas Wolf during his talk: Transfer Learning and the birth of the Transformers library You can watch it on YouTube 23 or on Twitch 5 at 8am PST Slides 10
This is an example of test question. You can test the like button below.
0
huggingface
Course
[Nov 15th Event] Matthew Watson - Chen Qian: NLP workflows with Keras
https://discuss.huggingface.co/t/nov-15th-event-matthew-watson-chen-qian-nlp-workflows-with-keras/11751
Use this topic to ask your questions to Matthew Watson and Chen Qian during their talk: NLP workflows with Keras. You can watch it on YouTube 2 or on Twitch 1 at 10:15am PST Colab notebooks: part 1 10, part 2 6.
Since a lot of people use both Keras and PyTorch, especially while using Hugging Face: have you considered making the PyTorch code transferable to Keras (to some extent at least) and vice versa, so as to make things easier for everyone?
0
huggingface
Course
Image neural search
https://discuss.huggingface.co/t/image-neural-search/11826
I don’t know exactly how Google search works, but when I type text I get related images. I will try to build the same kind of search using Transformer models. It involves computing text and image embeddings and finding their cosine similarity; a better score, closer to 1, leads to a result. Let’s have fun building it! Sample -
There’s a very similar project here already: Use OpenAI's CLIP for image search 4
0
huggingface
Course
[Nov 15th Event] Margaret Mitchell: On Values in ML Development
https://discuss.huggingface.co/t/nov-15th-event-margaret-mitchell-on-values-in-ml-development/11758
Use this topic to ask your questions to Margaret Mitchell during her talk: On Values in ML Development. You can watch it on YouTube 6 or on Twitch 2 at 9:30am PST
How could we balance the filtering of bias between freedom of speech and offensive/biased content?
0
huggingface
Course
[Nov 15th Event] Jay Alammar: A gentle visual intro to Transformers models
https://discuss.huggingface.co/t/nov-15th-event-jay-alammar-a-gentle-visual-intro-to-transformers-models/11749
Use this topic to ask your questions to Jay Alammar during his talk: A gentle visual intro to Transformers models. You can watch it on YouTube 9 or on Twitch at 8:45am PST
Can we get the URL Jay was sharing at the start?
0
huggingface
Course
Tensorflow in Part 2 of the course
https://discuss.huggingface.co/t/tensorflow-in-part-2-of-the-course/11728
Hi, I’m doing the second part of the course now, in particular, the chapter “The Datasets library”. In Part 1, I was following the tensorflow option but it seems that now only the pytorch one is available (when I select tensorflow, it still shows the pytorch-based tutorial). Are you planning to release the tensorflow tutorial for Part 2 also? Thanks in advance!
Hi Lenn! All the sections have a TensorFlow version. Chapter 5 is completely framework agnostic, that’s why you don’t see any differences between the two, but if you look at chapter 7, you will see the content is very different.
0
huggingface
Course
Create a docstring generator
https://discuss.huggingface.co/t/create-a-docstring-generator/11569
Please read the topic category description to understand what this is all about. Description: Applications like GitHub’s Copilot can automatically generate docstrings from a class or function name. The goal of this project is to fine-tune a Transformer like CodeT5 to do this ourselves! Model(s): Generating docstrings from source code can be modelled as a sequence-to-sequence task, so T5 models are a good starting point here: Salesforce/codet5-base. Datasets: A good dataset for this task is code_search_net, but feel free to find alternative datasets if you can’t find your favourite programming language there. Challenges: Models like CodeT5 are rather large, and you’ll need to think about which metrics one should use for this type of task. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that can automatically generate a docstring from a class or function name in your favourite programming language! Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: https://arxiv.org/abs/2109.00859v1, https://blog.einstein.ai/codet5/. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #docstring-generator channel. Just make sure you comment here to indicate that you’ll be contributing to this project.
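A minimal sketch of the seq2seq setup with the suggested checkpoint; note that Salesforce/codet5-base still needs fine-tuning on code_search_net-style (code, docstring) pairs before the generated docstrings are useful.

# Sketch: code-to-text generation with a CodeT5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))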
I find this task interesting and would like to find out more about how to contribute: for example, would fine-tuning for PL to NL (code comment/docstring) generation for Python be a suitable case for this project?
0
huggingface
Course
Create a NER tagger for African languages
https://discuss.huggingface.co/t/create-a-ner-tagger-for-african-languages/11524
Please read the topic category description to understand what this is all about. Description: Africa has over 2,000 spoken languages, but these languages are massively underrepresented in NLP research and datasets. The goal of this project is to train strong models for the MasakhaNER corpus, which is a high-quality dataset for named entity recognition in 10 African languages. Model(s): There are a few popular multilingual models that you can start with: xlm-roberta-base, bert-base-multilingual-uncased. Datasets: masakhaner. Challenges: It is unlikely that all ten languages in MasakhaNER are represented in multilingual models like XLM-R or mBERT, so some decisions will need to be made on which subsets to focus on. Desired project outcomes: Create a Streamlit or Gradio app on Spaces that can take text from one or more of the languages in MasakhaNER and extract the person name (PER), organization (ORG), location (LOC) and date & time (DATE) tags. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources: MasakhaNER: Named Entity Recognition for African Languages. Discord channel: To chat and organise with other people interested in this project, head over to our Discord and: follow the instructions on the #join-course channel, then join the #african-ner channel. Just make sure you comment here to indicate that you’ll be contributing to this project. Team organization on the Hub: To join this team, make sure you join the following organisation on the Hub: team-african-ner (🤗 Course Team African NER).
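A minimal sketch of inspecting the corpus with the Datasets library; "swa" (Swahili) is just one of the ten language configurations.

# Sketch: load one MasakhaNER language config and look at the label scheme.
from datasets import load_dataset

masakhaner = load_dataset("masakhaner", "swa")
example = masakhaner["train"][0]
print(example["tokens"])
print(example["ner_tags"])  # integer ids for O, B-PER, I-PER, B-ORG, ..., B-DATE, I-DATE
print(masakhaner["train"].features["ner_tags"].feature.names)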
I am interested in this project. I don’t know any African languages, but I would be delighted to create something useful for the community. Count me in!
0
huggingface
Course
Part II of the Course goes live in November!
https://discuss.huggingface.co/t/part-ii-of-the-course-goes-live-in-november/11100
Hello everyone! We’re super excited to announce that Part II of the course will be released on November 15th. This part of the course takes a deep dive into the Datasets and Tokenizers libraries, so you’ll learn how to process huge datasets and train your very own tokenizers on them. You’ll also learn about the core NLP tasks that Transformers excel at, and even how to debug your training pipelines effectively. To celebrate the launch, we’re planning a large community event to which you’re all invited! The event involves two days of talks on Nov 15-16 from experts in the field. To register for the event, please fill out this form: 🤗 Course Community Event. After the talks, you will have the opportunity to collaborate on group projects and demo your work as a Streamlit or Gradio app on Hugging Face Spaces. We’re looking forward to seeing you at the live event, and feel free to post any questions you might have here!
Thank you all for creating these courses; I can’t wait to read part II. I read the first part a few months ago, and it helped me complete my bachelor final project and ever since then, I have been learning more and more about your excellent work, transformers, and NLP.
0
huggingface
Course
Share your projects!
https://discuss.huggingface.co/t/share-your-projects/6803
After following the first section of the course, you should be able to fine-tune a model on a text classification problem and upload it back to the Hub. Share your creations here and if you build a cool app using your model, please let us know!
It’s not exactly a project, but I’m super excited to share my first public Kaggle dataset: Huggingface Modelhub (kaggle.com), a dataset containing information on all the models on the Hugging Face model hub. With the help of good folks at HF, I was able to query the metadata information available on the model hub and upload it as a Kaggle dataset. It should be helpful to anyone looking to analyze and create EDA/text-processing notebooks on the metadata of publicly available models. The dataset contains the README model card data as well. Please have a look and provide feedback.
0
huggingface
Course
Chapter 2 questions
https://discuss.huggingface.co/t/chapter-2-questions/6799
Use this topic for any question about Chapter 2 15 of the course.
In the Handling multiple sequences page of Chapter 2, there is a bug in the code under the Attention masks section (page: Using 🤗 Transformers - Hugging Face Course): the PyTorch toggle is on, but the code uses TensorFlow’s tf.constant function. There is also a typo on https://huggingface.co/course/chapter2/6?fw=pt. And isn’t WordPiece a subword algorithm as well?
0
huggingface
Course
Accuracy is stagnant
https://discuss.huggingface.co/t/accuracy-is-stagnant/9891
Hello … I am following the course but using a different dataset from load_dataset and slight mods. When I run this code, the accuracy remains constant. I am expecting in the best scenario the accuracy to improve, if not have some variation. But it remains constant between each epoch. Any idea?

from tqdm.auto import tqdm

progress_bar = tqdm(range(num_epochs * num_steps))

for epoch in range(num_epochs):
    model.train()
    for batch in train_dl:
        model_inputs = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**model_inputs)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    model.eval()
    metric = load_metric('accuracy')
    for batch in eval_dl:
        model_inputs = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**model_inputs)
        logits = outputs.logits
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=model_inputs['labels'])
    print(metric.compute())

Output is:
{'accuracy': 0.2112}
{'accuracy': 0.2112}
{'accuracy': 0.2112}
This definitely shows your model is not training. A few things to check are: maybe the learning rate is too high/too low? maybe there is some problems in your labels and the model can’t learn?
0
huggingface
Course
Chapter 3 problem
https://discuss.huggingface.co/t/chapter-3-problem/8578
I’m getting an error that says all my inputs are scalars. It would be helpful to see a completed working file, as I’m a bit confused about the order. Here’s my messy code:

#!/bin/env python3
import torch
from transformers import AdamW, AutoModelForSequenceClassification
from datasets import load_dataset
from transformers import AutoTokenizer
from transformers import logging
from torch.utils.data import DataLoader
from transformers import DataCollatorWithPadding
from transformers import TrainingArguments
from transformers import Trainer

logging.set_verbosity_error()

raw_datasets = load_dataset("glue", "mrpc")
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=2,
)

def tokenize_function(example):
    return tokenizer(
        example["sentence1"],
        example["sentence2"],
        truncation=True,
    )

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    num_proc=4,
    batched=True,
)
tokenized_dataset = tokenized_datasets.rename_column("label", "labels")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# train_dataloader = DataLoader(
#     tokenized_dataset["train"],
#     batch_size=16,
#     shuffle=True,
#     collate_fn=data_collator,
# )

training_args = TrainingArguments(
    "test-trainer",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    learning_rate=2e-5,
    weight_decay=0.01,
)

trainer = Trainer(
    model,
    training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
)
trainer.train()

# optimizer = AdamW(model.parameters())
# loss = model(**batches).loss
# loss.backward()
# optimizer.step()
It’s hard to know what the problem is if you don’t copy the error message as well. I just tried your code and it runs fine on my side.
0
huggingface
Course
Error importing AutoModel
https://discuss.huggingface.co/t/error-importing-automodel/8574
Hi, I’m just trying to run the code from the chapter in Google Colaboratory, but I keep getting an error when importing AutoModel. I am not sure why this is happening. I tried importing other AutoModel variations and I still get the same error. Please help. The link to the notebook I was trying to run: Google Colaboratory
it works now! thank you to whoever fixed it!
0
huggingface
Course
Chapter 4 questions
https://discuss.huggingface.co/t/chapter-4-questions/6801
Use this topic for any question about Chapter 4 18 of the course.
Can someone take a look at the chapter 4 notebook. The .push_to_hub() method isn’t working, at first the error was about ‘git-lfs’ and even downloading that doesn’t seem to work… Thanks
0
huggingface
Course
Where to get the best online Courses
https://discuss.huggingface.co/t/where-to-get-the-best-online-courses/7002
I heard Udemy is a good place to find courses, but I can’t access it. Will a VPN help?
Besides Udemy, you can also try Coursera and Udacity: Explore our Programs and Courses | Udacity Catalog (udacity.com) and Coursera | Build Skills with Online Courses from Top Institutions (coursera.org).
0
huggingface
Course
Live sessions per chapter
https://discuss.huggingface.co/t/live-sessions-per-chapter/6804
To launch the course, the team developing it will be hosting live sessions where you can ask any questions. Those live sessions will be on Twitch at the following times (the two sessions for each chapter will cover the same material): Chapter 1 Chapter 1 with Lysandre 253: Wednesday, June 16th (8:00-9:00 UTC) Chapter 1 with Sylvain 118: Thursday, June 17th (18:00-19:00 UTC) Chapter 2 Chapter 2, with Lewis 29: Wednesday, June 23rd (8:00-9:00 UTC) Chapter 2, with Sylvain 68: Thursday, June 24th (18:00-19:00 UTC) Chapter 3 Chapter 3 — PyTorch, with Lewis 39: Wednesday, June 30th (8:00-9:00 UTC) Chapter 3 — PyTorch, with Sylvain 38: Thursday, July 1st (18:00-19:00 UTC) Chapter 3 — TensorFlow, with Matt 26: Wednesday, June 30th (17:00-18:00 UTC) Chapter 3 — TensorFlow, with Matt 23: Thursday, July 1st (10:00-11:00 UTC) Chapter 4 Chapter 4, with Omar 29: Wednesday, July 7th (8:00-9:00 UTC) Chapter 4, with Omar 29: Thursday, July 8th (18:00-19:00 UTC)
Hi @sgugger , will this be recorded and reshare on youtube?
0
huggingface
Course
Setup questions
https://discuss.huggingface.co/t/setup-questions/6796
Use this topic for any questions related to Chapter 0 10 of the course.
Hi @sgugger, thank you for the course. Is it advisable to use a dockerized container as a coding environment on a windows machine? Or inside a WSL2 layer? Or does something or the other may create issues later? (Asking because I have faced a lot of issues with NCCL failure while trying to run XGboost et al on a RAPIDS installation on WSL2).
0
huggingface
Course
Huggingface course study group with a fast.ai bent
https://discuss.huggingface.co/t/huggingface-course-study-group-with-a-fast-ai-bent/6823
Fill out the interest form here: https://forms.gle/ZwZ6oVhsq3vSM5xf8 24 We’ll be tracking with the official course topic wise, but looking at how to use fast.ai and blurr to train and deploy models … from DataBlocks to inference and all else in between. Fill out the form and I’ll be following up with those who do with details soon.
We’ll be announcing dates end of this week. Please respond to this poll as to your preferred start time (either Saturday or Sundays): twitter.com Wayde Gilliam 1 @waydegilliam Thanks to all who filled out the @fastdotai'sh @huggingface course study group interest form! We'll be announcing dates (either Saturday or Sundays) at the end of this week with things kicking off early July. Please respond to this poll as to which time works best: 11:58 AM - 23 Jun 2021
0
huggingface
Course
Chapter 1 questions
https://discuss.huggingface.co/t/chapter-1-questions/6797
Use this topic for any question about Chapter 1 36 of the course.
Small mismatch? From the widget in roberta-large-mnli · Hugging Face, I see the classification is between “CONTRADICTION”, “NEUTRAL” and “ENTAILMENT”.
0
huggingface
Course
About collaborative translation
https://discuss.huggingface.co/t/about-collaborative-translation/6883
Everyone, first of all, the Hugging Face course notes are very nice. Good luck to everyone. Together with our friends (@basakbuluz, @ayyucekizrak) we are working voluntarily to increase Turkish open-source content in the field of machine learning. We also compile these resources on our YZAI website for access from one place. We can contribute primarily to the translation of the Hugging Face Course contents into Turkish. As a method for translation, for example, the translation repository of ML notes by the Amidi brothers from Stanford is very convenient and provides a nice guide. Can such an infrastructure be provided, and is it possible to contribute to other languages? Thanks.
Thanks a lot for reaching out! We are in the process of open-sourcing the content of the course, which will make it easier to translate the material as well as the video subtitles! It’s probably going to take a few weeks though, but I’ll ping here when it’s done!
0
huggingface
Course
How the course relates to the tutorials
https://discuss.huggingface.co/t/how-the-course-relates-to-the-tutorials/7482
is the course meant to be a summarization and distillation of the tutorials or substantially different? either way i’m really excited about it.
hey @TheIneffableALIAS the course is complementary to the official tutorials 9 as it assumes less prior knowledge and provides “bite-sized” chunks of information through videos and simple examples. if you’re brand new to transformers (or NLP in general), my suggestion would be to start with the course and then dive into the official tutorials to see more advanced usage
0
huggingface
Course
A Quick Review of the Course - Video
https://discuss.huggingface.co/t/a-quick-review-of-the-course-video/7042
I did a quick review of the course. I look forward to see the other chapters being published. Any timeline for release? Dive into Transformers - HuggingFace Free Course [Overview]
Thanks for sharing the review! The next part of the course is scheduled for the fall.
0
huggingface
Course
Softmax vs logits
https://discuss.huggingface.co/t/softmax-vs-logits/7008
Why do we need to apply softmax after getting the logit values? I know it says that it helps to normalise the scores and get a probabilistic interpretation. But isn’t the utility of the logits/softmax scores just to determine which value is bigger and then infer the label? For example, if you get a logits score of [-4.2095, 4.6053], where -4.2095 refers to label0 and 4.6053 refers to label1, then as 4.6053 > -4.2095, I would keep label1 as my prediction. If instead I apply softmax to the logits, I get [1.4850e-04, 9.9985e-01]. With this softmax score I will still infer that the predicted label is label1.
If you just want to get the predicted class, you don’t need the softmax layer as, as you pointed out, you just have to take the index of the maximum logits. The softmax will convert the logits into probabilities, so you should use it when you want the probabilities for each prediction.
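A tiny check of this point, using the logits from the question above: softmax only rescales the logits into probabilities that sum to 1, so the argmax (and therefore the predicted label) is unchanged.

# Softmax preserves the argmax; it only adds a probabilistic interpretation.
import torch

logits = torch.tensor([-4.2095, 4.6053])
probs = torch.softmax(logits, dim=-1)

print(probs)                 # approximately tensor([1.4850e-04, 9.9985e-01])
print(torch.argmax(logits))  # tensor(1)
print(torch.argmax(probs))   # tensor(1) -- same predicted label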
0
huggingface
Research
[Suggestions and Guidance]Finetuning Bert models for Next word Prediction
https://discuss.huggingface.co/t/suggestions-and-guidance-finetuning-bert-models-for-next-word-prediction/14043
Problem statement: to produce a next-word prediction model on legal text. The aim is to build an autocomplete model which will make use of existing typed text as well as a possible concatenation of vectors from prior clauses/paragraphs. Current approach: Because BERT-based models are trained with masked language modelling, pretrained models such as LegalBERT did not produce good accuracy for predicting the next word when the word to be predicted was marked as [MASK]. Here is an example sentence, “use of [MASK]”, where “marked” is the next word to be predicted in place of the “[MASK]” token. (Note that there would not be words present after the mask token, only before it.) I am currently approaching the problem as a SequenceClassification problem where the labels are the token ids of the words to be predicted next. I will also attempt to fine-tune GPT-2 on the legal text using run_clm.py from the Hugging Face examples directory. Is there a better way to approach this problem of next-word prediction? Any suggestions and guidance would be welcome. Thank you in advance.
Hi Sumanth! I believe you are already on the right track by fine-tuning GPT-2. The difference is that GPT was trained using causal/autoregressive attention. It means that GPT is specifically trained to predict the next word without having access to the words to the right of the masked token (unlike BERT). The different models and their architectures are depicted in this chart. Long story short - you should see better results with GPT-2. Let us know how it goes. Cheers, Heiko
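To make the suggestion concrete, here is a minimal sketch of next-word prediction with a causal LM; gpt2 stands in for a checkpoint fine-tuned on legal text (for example via run_clm.py).

# Sketch: rank candidate next tokens with a causal language model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The parties agree that the use of", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top 5 candidate next tokens for the final position in the prompt
top5 = torch.topk(logits[0, -1], k=5).indices.tolist()
print([tokenizer.decode([t]) for t in top5])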
0
huggingface
Research
Paper Notes: Deepspeed Mixture of Experts
https://discuss.huggingface.co/t/paper-notes-deepspeed-mixture-of-experts/13908
Summary The legends over at DeepSpeed released a paper 9 on scaling Mixture of Experts with a bunch of cool ideas. Since they will probably release some pytorch code soon I wanted to summarize/discuss the findings so that I learn them better. I provide 0 background on Mixture of Experts, assume knowledge of Top1 vs Top2 gating, for selfish/lazy reasons. Read the deepspeed blog post 1 for background. I abstract the term “acc” to encompass all types of metrics: validation perplexity, zero shot accuracy, etc. I used @srush trick of trying to read critically (to get your brain to think harder about other peoples’ results) but I don’t want to come off as too negative. I really enjoyed this paper and am excited to read the code! The DeepSpeed team proposes: (a) (sec 4.1) architectural modifications that reduce the number of experts without hurting acc. (b) (sec 4.1) Moe 2 Moe distillation, (instead of MoE 2 dense distillation like the FAIR paper (appendix Table 9) and the Switch paper) (c) (sec 5) Systems Optimizations to make inference fast Improved Communication Collectives for MoE Inference (hierarchical all2all) tutel style single-device kernels to make routing tokens to experts fast. 4D parallelism!? I now cover architecture and distillation, and save systems optimizations for later because I don’t fully understand them yet. Architecture: Pyramid Residual MoE This section is really well written. It contains two very nice ablations that motivated the changes: Phenomenon 1: “Pyramid” We compare the performance of two different half-MoE architectures. More specifically, we put MoE layers in the first half of the model and leave the second half’s layers identical to the dense model. We switch the MoE layers to the second half and use dense at the first half. The results show that deeper layers benefit more from large number of experts. This also saves a ton of parameters: 40% reduction at 1.3B dense equivalent size, which will be useful at inference time. Phenomenon 2: “Residual” we can achieve the benefit of using two experts per layer but still use one communication. They frame this as trying to get the benefits of top2 routing without the costs. But, basically MoeLayers become only half sparse – a dense ffn that process the input as does 1 expert – the results are added. Compared to top2 where 2 different sparse experts process the input, this is cheaper because there is less communication (you only need to send the input to 1 place instead of 2?) Note this does not improve acc compared to top2, just speed. Putting it all together: FAIR arch (see table 1) (52B Params) Layers: top2 gating (each token gets routed to 2 experts) 512 experts at each MoE layer Deepspeed Arch: (31B params) Layers: each token processed by dense FFN and 1 expert (same FLOPs as top2 gating if same number of experts, I believe). pyramid: somewhere between 32 and 128 experts at each Moe layer – way fewer params! In terms of acc, (PIQA is the only overlapping evaluation), the 31B Deepspeed performs between the FAIR 52B and the FAIR 207B and was probably lower training cost than the 52B, even before all the systems optimizations in section 5. Nice! With the systems optimizations they say training is 5x faster than dense (to the same acc). The FAIR paper says “4x faster than dense”, but measures TFLOPS, which make the extra communication required for MoE appear to be free. So all in all this definitely seems like a better architecture. 
It would have been cool if Tables 2,4 had training cost and inference cost next to the few shot performances (or 1 big joined table somewhere!). Staged Knowledge Distillation: Mixture Of Students (MoS) Caveat before you read this section: in most distillation results, the student model is MUCH smaller than the teacher model, like half as large or so. Here, the student model is only 12.5% smaller than the teacher model. (3 fewer layers, 4B fewer params (31B vs 27B)). They are able to lose very little performance, which is nice, but they also didn’t really lose that much weight, and it would be interesting to try to replicate what they did with smaller students. Caveat 2: name deeply misleading. It’s normal KD but they switch to cross entropy loss halfway through that’s it! Anyways, these are the first published MoE 2 MoE Distillation results. The switch paper and FAIR paper both distill Moe 2 Dense models (since they are much easier to serve than MoE models, a gap deepspeed claims to eliminate in section 5 – the one I don’t understand yet:( ). They use the same KD loss as the other papers, but they turn it off halfway through training. They say this improves acc, but I am most interested in the speed implications. I tried MoE2MoE distillation but it was extremely slow (like 10x slower than Dense2Dense) because of teacher inference every step. If we could only run the teacher forward pass for part of the student training, that would be sweet! Next Let me know any inaccuracies, important omissions, what you ate for lunch follow up ideas! Next week I will try to tackle Section 5 (Systems optimizations) and if I don’t I will burn a 20 dollar bill and record it!
What is 4D parallelism?
0
huggingface
Research
Using mixup on RoBERTa
https://discuss.huggingface.co/t/using-mixup-on-roberta/306
Hello everyone! I tried to apply the data augmentation technique mixup, popularly used in computer vision, but in this case on NLP. The algorithm developed has two phases. The first phase gets the representation for each sentence of the batch, computing the mean of the corresponding hidden states of the last layer. The fragment below shows the corresponding module.

class LanguageModel(nn.Module):
    def __init__(self, pretrained_model_name, device="cuda:0", anonymized_tokens=False):
        super(LanguageModel, self).__init__()
        # Load tokenizer
        self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
        # Load model
        self.config = AutoConfig.from_pretrained(pretrained_model_name)
        self.config.output_hidden_states = True
        self.model = AutoModel.from_pretrained(pretrained_model_name, config=self.config).to(device)

    def forward(self, input_ids, attention_mask):
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
        )
        activations = torch.mean(outputs[0], axis=1)
        return activations

After that, it generates the mixup examples using the function proposed in the original code, but with the representations computed in the previous step as input, instead of the images as originally. Once the mixup examples are generated, the second phase makes the predictions (the fragment below shows the corresponding module). Finally, the loss is computed in the same way as in the original work.

class ClassifierLayer(nn.Module):
    def __init__(self, num_classes, dropout_rate=0.1, petrained_size=768, device="cuda:0"):
        super(ClassifierLayer, self).__init__()
        self.layer = nn.Linear(petrained_size, num_classes, bias=True).to(device)
        self.drop = nn.Dropout(dropout_rate)

    def forward(self, z):
        activations = self.layer(self.drop(z))
        return activations

The fragment of code below shows a summary of the proposed training loop; the full script used is here:

for idx_epoch in range(0, args.num_train_epochs):
    language_model.train()
    classifier_layer.train()
    accs = 0; ps = 0; rs = 0; f1s = 0; lss = 0
    for (idx_batch, train_batch) in enumerate(train_dataloader):
        # 0: input_ids, 1: attention_mask, 2: token_type_ids, 3: labels
        batch_train = tuple(data_.to(device) for data_ in train_batch)
        labels_train = batch_train[-1]
        inputs = {
            'input_ids': batch_train[0],
            'attention_mask': batch_train[1],
        }
        optimizer.zero_grad()
        # 1st phase: contextual embeddings
        contextual_embeddings = language_model(
            input_ids=inputs['input_ids'],
            attention_mask=inputs['attention_mask'],
        )
        # 2nd phase: mixup
        inputs, targets_a, targets_b, lam = mixup_data(contextual_embeddings, labels_train, args.alpha_mixup, use_cuda)
        inputs, targets_a, targets_b = map(Variable, (inputs, targets_a, targets_b))
        predictions = classifier_layer(inputs)
        loss = mixup_criterion(criterion, predictions, targets_a, targets_b, lam)
        # 2nd phase: standard
        # predictions = classifier_layer(contextual_embeddings)
        # loss = criterion(predictions, labels_train)
        lss += loss
        loss.backward()
        optimizer.step()
        scheduler.step()

Experimenting with this approach, the results obtained are very poor… Have any of you worked on an approach similar to this one with successful/good results? Thanks.
Hi @franborjavalero! This is really interesting. I remember @sgugger got a little bump using mixup after embeddings with ULMFiT. Would be really awesome to share this code as implementation for this is not trivial.
0
huggingface
Research
ASR spell correction
https://discuss.huggingface.co/t/asr-spell-correction/5103
Because I love the mindset within the community of the Wav2Vec2 sprint, I'd like to share some ideas about improving the accuracy of ASR and making it more stable for production. I would be happy to discuss them. In some experiments I tested many systems and algorithms, and one in particular reached amazing accuracy. Once we have the transcribed text from the Wav2Vec2 model, there are many ways to correct it: either a dictionary search for each word, automatically using the nearest result, or a seq2seq model. But what about a hybrid solution based on two or three parts?
Part 1: token classification, to recognize which words are wrong in context. Instead of human names or locations, just classify wrong or right.
Part 2: when we have the wrong tokens, check a dictionary for similar alternatives, either using BM25 (tested) or DPR neural search (untested).
Part 3: when we have some alternatives for each token, either use the best-scored result or let a model trained on multiple choice decide. In my quick tests I used the best alternative, but I definitely need to check the multiple-choice variant.
With these 3 steps:
Token classification
Dictionary search using BM25-like algorithms
Replacing false tokens with the best-scored alternative
I reached amazing results, up to a WER of 1.3%. At the moment my code is pretty noisy and I would like to start from zero again to build a clean library based on Hugging Face models, or maybe just a community notebook, depending on your feedback. I'd like to hear what you think; maybe you have a much better idea? Maybe someone is interested in joining this research?
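(Not the author's code, just an illustration of Part 2: the dictionary search could be prototyped with the rank_bm25 package roughly like this. Indexing dictionary words as character n-grams is only one possible design choice, and the toy vocabulary is a placeholder.)

from rank_bm25 import BM25Okapi

def char_ngrams(word, n=3):
    # Pad the word so prefixes/suffixes also become n-grams.
    word = f"#{word}#"
    return [word[i:i + n] for i in range(len(word) - n + 1)]

dictionary = ["because", "community", "production", "accuracy"]  # toy vocabulary
bm25 = BM25Okapi([char_ngrams(w) for w in dictionary])

def best_alternatives(wrong_token, top_n=3):
    # Return the dictionary entries whose n-gram profile best matches the ASR output.
    return bm25.get_top_n(char_ngrams(wrong_token), dictionary, n=top_n)

print(best_alternatives("comunity"))  # should rank "community" near the top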
Amazing idea. I would love this. Do you have any code I can check out?
0
huggingface
Research
Copying mechanism for transformer
https://discuss.huggingface.co/t/copying-mechanism-for-transformer/5025
Hello. HF community members I wonder how do you think about the copying mechanism for transformer. I can see very few papers/tech reports implementing copying mechanism for transformer. aclweb.org 2020.acl-main.125.pdf 71 816.26 KB web.stanford.edu 15784595.pdf 53 256.28 KB Also, I couldn’t find anyone who discusses copying mechanism in this forum. Personally, I am stuck with computing ‘generating-copying switch’ since transformer does not have explicit ‘context vector’ in RNN. Do you have any thoughts about the lack of reference/discussion for copying mechanism? Is it worth implement & contribute to HF community with copying mechanism?
Hi, I have tried a copy mechanism in the BART model. I directly utilize the cross-attention as the attention score for the source samples. This idea is from openNMT CopyGenerator 69. My implementation is like this:

def copy_mechanism_v3(self, logits, cross_attentions, decoder_hidden_states, encoder_input_ids):
    last_hidden_state = decoder_hidden_states[-1]
    last_attention_weight = cross_attentions[-1]
    # context_vector shape: batch_size, decoder_length, hidden_size
    p_copy = torch.sigmoid(self.linear_copy(last_hidden_state))
    previous_word_pro = torch.softmax(logits, dim=-1) * (1 - p_copy)
    encoder_word_attention = p_copy * torch.mean(last_attention_weight, dim=1)
    # did not copy the pad
    mask = torch.where(encoder_input_ids == 1,
                       encoder_word_attention.new_zeros(encoder_input_ids.shape),
                       encoder_word_attention.new_ones(encoder_input_ids.shape))
    encoder_word_attention = encoder_word_attention * mask.unsqueeze(1)
    personal_words = encoder_input_ids.unsqueeze(1).repeat(1, encoder_word_attention.shape[1], 1)
    word_pro = torch.scatter_add(previous_word_pro, 2, personal_words, encoder_word_attention)
    return word_pro
0
huggingface
Research
Guide: The best way to calculate the perplexity of fixed-length models
https://discuss.huggingface.co/t/guide-the-best-way-to-calculate-the-perplexity-of-fixed-length-models/193
Hey all. Just thought you might be interested in a page I just added to the research docs on the perplexity of fixed-length models 108.
Perplexity (PPL) is defined as the exponentiated average negative log-likelihood of a sequence. For a t-length sequence X, this is defined as
\text{PPL}(X) = \exp \left\{ -\frac{1}{t} \sum_i^t \log p_\theta (x_i|x_{<i}) \right\}
But with fixed-length models (like most transformers), we can't always condition on the entire preceding subsequence when predicting each token. The initial instinct for many in dealing with this problem is to break the whole sequence into segments equal to the model's max input size and calculate the likelihoods of each segment independently. This is not the best approach, however, since it gives the model very little context to use for prediction at the beginning of each segment. I'll illustrate this with the following gif, where we imagine a model with a max input size of 6 adding up the log-likelihoods for the sentence, "Hugging Face is a startup based in New York City and Paris".
[Figure: segment-by-segment (chunked) evaluation]
When the model starts the second segment, it has to try to predict the word "in" without any context, even though we have 5 words before it that the model could be using (since we said the max input size is 6). A better approach is to instead employ a sliding window strategy, where you continually move the context across the sequence, allowing the model to take advantage of the available context.
[Figure: sliding-window evaluation]
This is slower to compute, but will typically yield better scores and is actually much closer to the way the sequence probabilities are formally decomposed (e.g. see the equation above). In the guide 108, we show how to do this in a strided way with GPT-2. When using the first, naive approach, GPT-2 gets a PPL of 19.64 on WikiText-2. In contrast, when we use a strided sliding window, this score improves dramatically down to 16.53.
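(A condensed sketch of the strided evaluation described above, close in spirit to the guide; the text and the stride value are placeholders, and the final sum is normalized by the total number of tokens scored.)

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Hugging Face is a startup based in New York City and Paris. " * 100
encodings = tokenizer(text, return_tensors="pt")

max_length = model.config.n_positions  # 1024 for GPT-2
stride = 512
nlls = []
for i in range(0, encodings.input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i  # only the tokens that are new in this window are scored
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # context-only tokens are ignored by the loss

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        nlls.append(outputs.loss * trg_len)  # loss is averaged over trg_len tokens

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)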
Hi, I have a question about the perplexity calculation from the guide 30. Why do we divide by i in the example, see ppl = torch.exp(torch.stack(lls).sum() / i)? If you have a codebase or paper that exemplifies this behaviour could you please share it? Thanks!
0
huggingface
Research
Text similarity not by cosine similarity
https://discuss.huggingface.co/t/text-similarity-not-by-cosine-similarity/8766
Hi all, I have a question. I have a dataset containing questions and answers from a specific domain. My goal is to find the X most similar questions to a query. For example: user: “What is python?” dataset questions: [“What is python?”, “What does python means?”, “Is it python?”, “Is it a python snake?”, “Is it a python?”] I tried encoding the questions to embeddings and calculating the cosine similarity, but the problem is it gives me a high similarity score for “Is it python?” for the query “What is python?”, which clearly does not have the same meaning, while “What does python means?” gets a very low score compared to “Is it python?”. Any suggestions how I can overcome this problem? Maybe new approaches…
if cosine similarity is not giving you the results you want, you could try a different metric like euclidean / manhattan / minkowski distance or jaccard similarity. alternatively you could try changing the embedding model to see if that improves the comparisons
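(A small sketch of comparing several metrics side by side, assuming a sentence-transformers checkpoint; the model name is just an example, and for distances a smaller value means a closer match.)

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances, manhattan_distances

model = SentenceTransformer("all-MiniLM-L6-v2")
query = ["What is python?"]
candidates = ["What does python means?", "Is it python?", "Is it a python snake?"]

q_emb = model.encode(query)
c_emb = model.encode(candidates)

print("cosine   :", cosine_similarity(q_emb, c_emb)[0])    # larger = more similar
print("euclidean:", euclidean_distances(q_emb, c_emb)[0])  # smaller = more similar
print("manhattan:", manhattan_distances(q_emb, c_emb)[0])  # smaller = more similar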
0
huggingface
Research
Improving performance of Wav2Vec2 fine tuning with word piece vocabulary
https://discuss.huggingface.co/t/improving-performance-of-wav2vec2-fine-tuning-with-word-piece-vocabulary/6292
Hello, I'm fine-tuning XLSR-Wav2Vec2 on 200+ hours of speech in a language not in the original pretraining. The training progresses nicely; however, when it reaches about 40 WER it starts to overfit (WER doesn't improve much and the train loss decreases while the eval loss goes up). I've tried increasing some SpecAugment parameters, but it only helped a bit. I've noticed that with the SpeechBrain implementation I'm getting somewhat better results (at the expense of training stability) and was wondering if that is due to the larger vocabulary they use there. Has anyone tried to use a tokenizer with a vocabulary that contains subwords and words in addition to characters? I couldn't find any experiment that uses one with Hugging Face Transformers Wav2Vec2. I see in the Wav2Vec 2.0 paper they say: "We expect performance gains by switching to a seq2seq architecture and a word piece vocabulary." https://arxiv.org/pdf/2006.11477.pdf 5 Any suggestions on how to do that with Hugging Face Transformers? P.S. My dataset is noisy and not super clean. Any help or suggestion would be very helpful. Samuel
Not sure how I’d switch to a seq2seq architecture, but for word piece, I think you just need to change the vocab passed to the Wav2Vec2CTCTokenizer. Instead of the individual alphabet characters used for the vocab in the XLSR example, you’d need to use the wordpiece/BPE algorithm on your language text data and pass that through.
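(A rough sketch of that idea, under the assumption that Wav2Vec2CTCTokenizer only needs a vocab.json mapping tokens to ids. The corpus file and vocab size are placeholders, the Wav2Vec2ForCTC head would have to match the new vocabulary size, and whether CTC actually benefits from subword targets is exactly the open question here.)

import json
from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from transformers import Wav2Vec2CTCTokenizer

# 1. Learn a small BPE vocabulary from text in the target language.
bpe = Tokenizer(models.BPE(unk_token="[UNK]"))
bpe.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=1000, special_tokens=["[UNK]", "[PAD]", "|"])
bpe.train(files=["my_language_corpus.txt"], trainer=trainer)  # hypothetical corpus file

# 2. Dump the learned vocab in the token-to-id format the CTC tokenizer expects.
with open("vocab.json", "w") as f:
    json.dump(bpe.get_vocab(), f, ensure_ascii=False)

# 3. Build the CTC tokenizer on top of the subword vocab instead of characters.
# Remember to set vocab_size=len(tokenizer) when instantiating Wav2Vec2ForCTC.
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)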
0
huggingface
Research
[Help needed] Extending Trainer for Meta learning
https://discuss.huggingface.co/t/help-needed-extending-trainer-for-meta-learning/635
I want to implement MAML with Glue dataset with transformers. In my case, query and support set will come from the same dataset. I’ve read some work in meta learning from HF team (Wolf et al., 18). Although I’ve implemented my training loop (with higher) (open for other methods as well), I am still looking for a correct reference implementation of MAML or Reptile to confirm. Currently my code inherits from Trainer. If anyone share a sample snippet that would perform MAML gradient updates, that’d be really helpful ?
So the MetaDataset 13 wraps any GlueDataset to give a list containing all classes when meta_dataset[0] is called. So this will become a num_of_classes (N) way K shot example. I've written this, which extends Trainer for MAML.

def train(self):
    self.create_optimizer_and_scheduler(
        int(
            len(self.train_dataloader)
            // self.args.gradient_accumulation_steps
            * self.args.num_train_epochs
        )
    )
    logger.info("***** Running training *****")
    self.global_step = 0
    self.epoch = 0
    eval_step = [2 ** i for i in range(1, 20)]
    inner_optimizer = torch.optim.SGD(
        self.model.parameters(), lr=self.args.step_size
    )
    self.model.train()
    tqdm_iterator = tqdm(self.train_dataloader, desc="Batch Index")
    # n_inner_iter = 5
    self.optimizer.zero_grad()
    query_dataloader = iter(self.train_dataloader)

    for batch_idx, meta_batch in enumerate(tqdm_iterator):
        target_batch = next(query_dataloader)
        outer_loss = 0.0
        # Loop through all classes
        for inputs, target_inputs in zip(meta_batch, target_batch):
            for k, v in inputs.items():
                inputs[k] = v.to(self.args.device)
                target_inputs[k] = v.to(self.args.device)

            with higher.innerloop_ctx(
                self.model, inner_optimizer, copy_initial_weights=False
            ) as (fmodel, diffopt):
                inner_loss = fmodel(**inputs)[0]
                diffopt.step(inner_loss)
                outer_loss += fmodel(**target_inputs)[0]

        self.global_step += 1
        self.optimizer.step()
        outer_loss.backward()

        if (batch_idx + 1) % self.args.gradient_accumulation_steps == 0:
            torch.nn.utils.clip_grad_norm_(
                self.model.parameters(), self.args.max_grad_norm
            )

        # Run evaluation on task list
        if self.global_step in eval_step:
            output = self.prediction_loop(self.eval_dataloader, description="Evaluation")
            self.log(output.metrics)
            output_dir = os.path.join(
                self.args.output_dir,
                f"{PREFIX_CHECKPOINT_DIR}-{self.global_step}",
            )
            self.save_model(output_dir)
0
huggingface
Research
Why are huge batch sizes used for pretraining and small ones for finetuning?
https://discuss.huggingface.co/t/why-are-huge-batch-sizes-used-for-pretraining-and-small-ones-for-finetuning/10836
In most, if not all papers on language models, I find that they often use very large batch sizes for pretraining on a language modeling task. But when they then finetune their model to show its performance on downstream tasks, the batch sizes are suddenly very small. For instance, the RoBERTa paper shows that its batch size during pretraining was 8k sentences (Table 9 in the appendix), however for finetuning the batches are considerably smaller (Table 10, appendix): 16 (RACE), 48 (SQuAD), 16, 32 (GLUE). This has puzzled me since forever and I have never discovered the rationale behind this. Is it a matter of scale? Something like: while pretraining you have so much different data, that you just want as much in one go as you can - it does not matter as much that the loss is smoothed out (averaged) over such huge batches. But when finetuning over a smaller dataset you do not want to average the loss over too much of the dataset at once because you then lose peculiarities of samples quickly. Or is there another reason? All ideas are welcome.
I don’t think they use the same hardware for pretraining and fine-tuning. E.g. multiple TPU pods or a GPU cluster for pretraining allows a big batch size but that’s maybe something the research team can only do once. Fine-tuning, and something more accessible (just one GPU for instance) then requires a smaller batch size to avoid the OOM. This is just a guess however.
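(As a side note: when hardware limits the per-device batch size, large "effective" batch sizes are usually emulated with gradient accumulation. A generic sketch, where the step count of 64 is only an example:)

def train_with_accumulation(model, optimizer, dataloader, accumulation_steps=64):
    """Emulate a large batch on a single GPU by accumulating gradients (sketch)."""
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(dataloader):
        # Scale the loss so the accumulated update averages over the big batch.
        loss = model(**batch).loss / accumulation_steps
        loss.backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()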
0
huggingface
Research
ELECTRA training reimplementation and discussion
https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004
After months of development and debugging, I finally successfully train a model from scratch and replicate the official results. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators 115 by Kevin Clark. Minh-Thang Luong. Quoc V. Le. Christopher D. Manning Code: electra_pytorch 249 AFAIK, the closest reimplementation to the original one, taking care of many easily overlooked details (described below). AFAIK, the only one successfully validate itself by replicating the results in the paper. Comes with jupyter notebooks, which you can explore the code and inspect the processed data. You don’t need to download and preprocess anything by yourself, all you need is running the training script. Replicated Results I pretrain ELECTRA-small from scratch and have successfully replicated the paper’s results on GLUE. Model CoLA SST MRPC STS QQP MNLI QNLI RTE Avg. of Avg. ELECTRA-Small-OWT 56.8 88.3 87.4 86.8 88.3 78.9 87.9 68.5 80.36 ELECTRA-Small-OWT (my) 58.72 88.03 86.04 86.16 88.63 80.4 87.45 67.46 80.36 Table 1: Results on GLUE dev set. The official result comes from expected results 3. Scores are the average scores finetuned from the same checkpoint. (See this issue 3) My result comes from pretraining a model from scratch and thens taking average from 10 finetuning runs for each task. Both results are trained on OpenWebText corpus Model CoLA SST MRPC STS QQP MNLI QNLI RTE Avg. ELECTRA-Small++ 55.6 91.1 84.9 84.6 88.0 81.6 88.3 6.36 79.7 ELECTRA-Small++ (my) 54.8 91.6 84.6 84.2 88.5 82 89 64.7 79.92 Table 2: Results on GLUE test set. My result finetunes the pretrained checkpoint loaded from huggingface. Official training loss curve My training loss curve Table 3: Both are small models trained on OpenWebText. The official one is from here 3. You should take the value of training loss with a grain of salt since it doesn’t reflect the performance of downstream tasks. More results How stable is ELECTRA pretraining? Mean Std Max Min #models 81.38 0.57 82.23 80.42 14 Tabel 4: Statistics of GLUE devset results for small models. Every model is pretrained from scratch with different seeds and finetuned for 10 random runs for each GLUE task. Score of a model is the average of the best of 10 for each task. (The process is as same as the one described in the paper) As we can see, although ELECTRA is mocking adeversarial training, it has a good training stability. How stable is ELECTRA finetuing on GLUE ? Model CoLA SST MRPC STS QQP MNLI QNLI RTE ELECTRA-Small-OWT (my) 1.30 0.49 0.7 0.29 0.1 0.15 0.33 1.93 Table 5: Standard deviation for each task. This is the same model as Table 1, which finetunes 10 runs for each task. Advanced details (Skip it if you want) elow lists the details of the original implementation 8/paper that are easy to be overlooked and I have taken care of. I found these details are indispensable to successfully replicate the results of the paper. Optimization Using Adam optimizer without bias correction (bias correction is default for Adam optimizer in Pytorch and fastai) There is a bug of decaying learning rates through layers in the official implementation , so that when finetuing, lr decays more than the stated in the paper. See _get_layer_lrs 12. Also see this issue 7. Using clip gradient using 0 weight decay when finetuning on GLUE It didn’t do warmup and then do linear decay but do them together, which means the learning rate warmups and decays at the same time during the warming up phase. 
See here 11
Data processing
For pretraining data preprocessing, it concatenates and truncates sentences to fit the max length, and stops concatenating when it reaches the end of a document.
For pretraining data preprocessing, it randomly splits the text into sentence A and sentence B, and also randomly changes the max length.
For finetuning data preprocessing, it follows BERT's way of truncating the longer of sentence A and B to fit the max length.
Trick
For MRPC and STS tasks, it augments the training data by adding the same training data but with sentence A and B swapped. This is called "double_unordered" in the official implementation.
It doesn't mask sentences like BERT: within the mask probability (15% or another value) of tokens, a token has an 85% chance of being replaced with [MASK] and a 15% chance of remaining the same, but no chance of being replaced with a random token.
Tying parameters
Input and output word embeddings of the generator, and input word embeddings of the discriminator: the three are tied together.
It ties not only word/position/token-type embeddings but also the layer norm in the embedding layers of both generator and discriminator.
Other
The output layer is initialized with TensorFlow v1's default initialization (i.e. Xavier uniform).
It uses Gumbel softmax to sample generations from the generator as input to the discriminator.
It uses a dropout and a linear layer in the output layer for GLUE finetuning, not what ElectraClassificationHead uses.
All public ELECTRA checkpoints are actually ++ models. See this issue 8
It downscales the generator by hidden size, number of attention heads, and intermediate size, but not by number of layers.
Need your help
Please consider helping us with the problems listed below, or tag someone else you think might help.
I haven't succeeded in replicating the results of the WNLI trick for ELECTRA-Large described in the paper.
When I finetune on GLUE (using finetune.py), GPU utilization is only about 30-40%. I suspect the reason is the small batch and model size (a forward pass only takes 1 ms) or slow CPU speed?
About more
Updates on this reimplementation and other tools I create will be tweeted on my Twitter, Richard Wang 21. Also, my personal research based on ELECTRA is underway; I hope I can share some good results on Twitter then.
This is awesome !
0
huggingface
Research
Online/streaming speech recognition
https://discuss.huggingface.co/t/online-streaming-speech-recognition/4456
Are there plans to implement online decoding for the speech recognition models such as wav2vec2 and XLSR? More specifically, to be able to receive audio in short chunks, and output partial transcripts as they become available. Motivation Many use cases are covered by the current wav2vec2 model in the library, involving batch recognition of pre-recorded text. However for an online application that wanted to continuously recognize speech on a live input stream, this may not be sufficient.
I would very much like to know whether this is possible too! Have you gotten any further on this, @arkadyark?
0
huggingface
Research
Significance of the [CLS] token
https://discuss.huggingface.co/t/significance-of-the-cls-token/3180
Hi, I’ve been using the HuggingFace library for quite sometime now. I go by the tutorials, swap the tutorial data with my project data and get very good results. I wanted to dig into a little bit deeper into how the classification happens by BERT and BERT-based models. I’m not able to understand a key significant feature - the [CLS] token which is responsible for the actual classification. I hope smart people here could answer my questions because I’m unable to find them on my own. When I searched for what the [CLS] token actually represent, most of the results indicate that “it is an aggregate representation of the sequence”. I can understand this part. Basically before BERT, people have used different techniques to represent documents ranging from averaging the word vectors of the document to computing document vectors using doc2vec. I can also understand that stacking a linear classification and feeding in the values for the [CLS] token (768 dim for a bert-base-uncased model), we can end up classifying the sequence. Here are my questions: Is my above understanding of the [CLS] token correct? Why is it always the first token? Why not the second, third or last? Did the authors of the original BERT paper get it to be the first token by trial and error? How exactly does it “learn” the representation of the sequence? I mean its basically trained in the same way as the other input tokens in the sequence, so what makes it special to represent the entire sequence? I couldn’t find any explanation to this question from either the paper or my search afterwards. Is it at all possible to get back the original sequence using the [CLS] token (I think not but worth asking)? I hope I can find some answers to these questions (or at least pointers to resources where I can find them). Please let me know if this is not correct place to post these questions and where I should post them. Thank you.
I would love to hear from others!
0
huggingface
Research
Implementing a custom Attention Transformer
https://discuss.huggingface.co/t/implementing-a-custom-attention-transformer/9702
Hello everyone, currently I am trying to implement a custom attention transformer, whose attention is given on Page No. 4 of this link 12. They have used hugging face for the implementation, and I am not sure about how to go for approaching this problem, and how to use hugging face to implement custom attention. Can anybody guide me, about how to go about implementing this? Thanks,
Hey @iakarshu my best guess is that the authors implemented DocFormer from scratch, so as far as I can tell you can’t do some clever subclassing of an existing model to tweak the attention layers. Having said that, you could look at the implementation of LayoutLMV2 4 which seems to share a similar approach and you can use this template 4 to get all the basic modeling files. Do you know if AWS open-sourced the pretrained weights of DocFormer? Without them, you might need a lot of compute to build a useful model. Hope that helps!
0
huggingface
Research
Citing/Crediting Language Models
https://discuss.huggingface.co/t/citing-crediting-language-models/8877
Hello. How is it customary to cite / credit a language model from huggingface in an academical paper, when the model does not have a publication of itself? Any examples? Thanks!
Hi @Secret, for now you can use the model’s URL (see How can I use BibTeX to cite a web page? - TeX - LaTeX Stack Exchange 34), and we are working with @lysandre and others on plugging a https://www.doi.org/ 3 system into the platform Let us know if this helps
0
huggingface
Research
The (hidden) meaning behind the embedding of the padding token?
https://discuss.huggingface.co/t/the-hidden-meaning-behind-the-embedding-of-the-padding-token/3212
So I noticed that transformers produce different embeddings for PAD tokens, and I know pad tokens are typically simply ignored for the most part (if present at all). However, as a forward pass on a batch typically contains dozens of padding tokens, it would be interesting to see whether these in fact hold any meaningful information (as padding tokens do attend to the sequence). Does anyone know of any research on what information might be present here?
One might legitimately ask why this is relevant: aren't padding tokens simply a convenience for efficient processing because we need the same tensor shape? This is naturally correct, but quite a few studies have clustered sentence embeddings, and it seems relevant to ask what influence the padding embeddings have on this.
For a short demonstration that they indeed have different embeddings:

import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model = transformers.BertModel.from_pretrained("bert-base-uncased")

input_ = tokenizer(["this is a sample sentence"],
                   return_tensors="pt",
                   # add some padding
                   padding="max_length", max_length=128, truncation=True)
output = model(**input_)

# extract padding token embeddings
pad_tok_id = [i for i, t in enumerate(input_["input_ids"][0]) if t == 0]
embedding_pad1 = output[0][0][pad_tok_id[0]]
embedding_pad2 = output[0][0][pad_tok_id[1]]
embedding_pad1.shape  # embedding size
embedding_pad1[0:10]
embedding_pad2[0:10]

tensor([-0.5072, -0.4916, -0.1021, -0.1485, -0.4096, 0.0536, -0.1111, 0.0525, -0.0748, -0.4794], grad_fn=<SliceBackward>)
tensor([-0.6447, -0.5780, -0.1062, -0.1869, -0.3671, 0.0763, -0.0486, 0.0202, -0.1334, -0.5716], grad_fn=<SliceBackward>)
@KennethEnevoldsen I have been thinking about the same a while ago. You have a point with different embeddings for pad tokens. But, to my understanding these never interfere with any part of model’s computation (like, self attention), since the pad tokens are always masked using the attention masks. Would you have an example of where the pad token embeddings could make a difference, given the attention mask?
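(One quick empirical check of that point, assuming a standard BERT checkpoint: with the attention mask applied, the hidden states of the real tokens should be unaffected, up to small numerical noise, by how much padding is appended. The pad positions therefore only matter for code that pools over all positions without using the mask.)

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

short = tokenizer(["this is a sample sentence"], return_tensors="pt")
padded = tokenizer(["this is a sample sentence"], return_tensors="pt",
                   padding="max_length", max_length=32)

with torch.no_grad():
    out_short = model(**short)[0]
    out_padded = model(**padded)[0]

n = short["input_ids"].shape[1]  # number of real (non-pad) tokens
# Real tokens' hidden states should match whether or not padding was added.
print(torch.allclose(out_short[0, :n], out_padded[0, :n], atol=1e-4))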
0
huggingface
Research
Language model to search an answer in a huge collection of (unrelated) paragraphs
https://discuss.huggingface.co/t/language-model-to-search-an-answer-in-a-huge-collection-of-unrelated-paragraphs/2210
I want to build a question/answer language model to search a large collection of paragraphs. Say 10k paragraphs. And find relevant answers in them. There are 2 issues I don’t know how to solve. existing solutions often identify an answer from a short paragraph. I don’t know how to deal with a lot of paragraphs. A naive approach would be going through each paragraph and identify an answer in each of them. existing solutions will generate an answer even when fed with an unrelated paragraph. they don’t give a confidence number. If I have 10k paragraphs to search an answer from, and only 3 paragraphs have an answer, using existing solutions won’t let me to rule out unrelated paragraphs. Is there a way to generate a document embedding first (using both a question and a paragraph ), and I can use the embedding to find candidate paragraphs first and then do the actual answer search. And when there is no answer, I’d like to get a confidence number that 's below my answer threshold. Are there any papers dealing with this problem?
DPR & RAG may be the references you want. Regarding your questions and my answers with DPR huggingface.co DPR — transformers 3.5.0 documentation 5 DPR (retriever module) select top-k paragraphs from 20 million of possible wikipedia paragraphs (not just 10k, and you can also make your own corpus) using very fast MIPS (maximum inner product search) implemented by FAISS DPR (reader module) produce a relevance score for each of the top-k passages so this is a confidence number that you mentioned Finally, RAG is an improvement of DPR where (1) you can combine different passages directly (both relevance and irrelevance) to produce the final answer by “marginalization” and (2) Final answer is generated in free-form, not necessarily contained in any of the passage . (Please see the paper for details https://huggingface.co/transformers/model_doc/rag.html 12 )
0
huggingface
Research
Seq2Seq Distillation: Methodology Questions
https://discuss.huggingface.co/t/seq2seq-distillation-methodology-questions/1270
This thread should be used to ask questions about how examples/seq2seq/distillation.py works, and to ask questions about the associated paper after it gets released.
What is the reasoning behind choosing alternating layers? Are there no-teacher distillation scores for XSUM? No-teacher distillation works for a non-seq2seq task as well, as we saw with MNLI; should we also check whether it works for other tasks?
0
huggingface
Research
Finetuning for fp16 compatibility
https://discuss.huggingface.co/t/finetuning-for-fp16-compatibility/977
T5 and Pegasus don't really work in fp16 because they create activations that overflow the fp16 range (they were trained in bfloat16, which has a larger range). Has anyone read/seen/heard anything about finetuning/scaling models so that their activations can fit in fp16, or more generally about encouraging smaller-magnitude activations? I tried one experiment on google/pegasus-xsum where I finetune with the summarization LM loss and add some additional losses based on the magnitude of the hidden states, but I haven't weighted them properly yet (the model instantly forgets how to summarize), so I'm looking around.
It’s been a long time since this post, but maybe you remember if the problem with fp16 will appear when training the models from scratch (pretraining)? I’ve seen some NaNs already while training with fp16 on, but after lowering the learning rate, beginning of training looks reasonable.
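(If it helps, one generic way to locate where activations blow up when testing fp16 is to hook every module and flag non-finite outputs. This is a plain PyTorch sketch, not a transformers-specific utility.)

import torch

def register_overflow_hooks(model):
    """Attach forward hooks that report which module produces inf/nan activations (sketch)."""
    handles = []

    def make_hook(name):
        def hook(module, inputs, output):
            tensors = output if isinstance(output, (tuple, list)) else (output,)
            for t in tensors:
                if torch.is_tensor(t) and not torch.isfinite(t).all():
                    print(f"non-finite activation in {name}, max abs = {t.abs().max().item()}")
        return hook

    for name, module in model.named_modules():
        handles.append(module.register_forward_hook(make_hook(name)))
    return handles  # call handle.remove() on each when done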
0
huggingface
Research
What can transformers learn without position encoding?
https://discuss.huggingface.co/t/what-can-transformers-learn-without-position-encoding/6554
So it obviously makes sense that attention mechanisms don’t have any inherent sense of position without encoding it explicitly, and for sequence prediction this seems critical. But, for example, word2vec via CBOW or skip gram is able to learn word embeddings without explicit position encoding. So my question is basically if we train a BERT model without the position encoding on the Masked LM task (something very similar to word2vec it seems to me), what is BERT capable of learning if anything? Would it be better than word2vec for creating word embeddings?
My intuition would be that the transformers would still have a notion of context. It would still know this word appear in context with those other words, but would lose the notion of order loosely associated with position embeddings. Also, it would still allow word embeddings to change depending on the other words in context. So it would still be better than word2vec, which only has one embedding by word (learned as a combination of several contexts).
0
huggingface
Research
Project Description
https://discuss.huggingface.co/t/project-description/6444
Hi @Mads your project looks very interesting, would you mind adding a description? huggingface.co Mads/wav2vec2-xlsr-large-53-kor-financial-engineering · Hugging Face 4
Hi Snow, thank you for your interest! I will update in the coming week as soon as possible!
0
huggingface
Research
PEGASUS model overfitting
https://discuss.huggingface.co/t/pegasus-model-overfitting/6246
Hey everyone, I would like to see any scientific evidence regarding model overfitting available for the PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization model. If anyone can point me to some resources or provide an answer, I’d greatly appreciate it Thanks and stay safe
Hey @theprincedrip, I don't know the answer off the top of my head, but one place to start would be to check out the citations of the PEGASUS paper, e.g. via Google Scholar 1
0
huggingface
Research
Classification Heads in BERT and DistilBERT for Sequence Classification
https://discuss.huggingface.co/t/classification-heads-in-bert-and-distilbert-for-sequence-classification/6146
Hi, I have been using BertForSequenceClassification and DistilBertForSequenceClassification recently and I have noticed that they have different classification heads. BertForSequenceClassification has a dropout layer and a linear layer, whereas DistilBertForSequenceClassification has two linear layers and a dropout layer. Is there a particular reason for this? Thanks in advance!
All in all, they have the same head: BertForSequenceClassification has a dropout layer and a linear layer but uses the pooler output, which went through a linear layer inside the BertModel. DistilBertModel has no pooler output however, so the first linear layer is there to replicate that.
0
huggingface
Research
Collaborative Training Experiment of an Albert Model for Bengali
https://discuss.huggingface.co/t/collaborative-training-experiment-of-an-albert-model-for-bengali/5991
Huggingface is launching a collaborative training experiment of an Albert Model for Bengali language with our community. We are actively looking for participants who will help us to train the model. So what do you need in order to participate- A Google Colab account That’s everything you need. [Although if you want to use the power of your own GPUs, Huggingface will also provide a script for that.] How you can contribute? If you are a native Bengali speaker, that would be a great help, we are looking for participants who will check the performance of the tokenizer, sentence splitter, etc. You might want to help us preprocessing the dataset. We are using the Wikidump and OSCAR Bengali dataset to train the model, if you have some suggestions on preprocessing these feel free to contribute in that part. Now the main part, distributive training. You have been provided a google colab script in order to start the training and if your kernel crashes just restart the training script. (Non native speakers can participate) Join our discord community link - https://discord.gg/GD9G4j8fJU 43 [A separate slack channel from Huggingface will be provided where you will get to know more about the distributive training framework and other related things.] We are aiming to start this collaborative training experiment from - May 7th Please do participate in this first Huggingface collaborative training experiment specifically the native bengali speakers.
Also I forgot to mention the main thing. Thanks to Yandex for creating this collaborative distributive training strategy. Without them this huge community training event would not be possible.
0
huggingface
Research
Multi-GPU Machine Setup Guide and QnA
https://discuss.huggingface.co/t/multi-gpu-machine-setup-guide-and-qna/5891
This is a WIKI post - so if you feel you can contribute please answer a few questions, improve upon existing answers or add an alternative answer or add new questions: This thread is to discuss Multi-GPU machine setup for ML. Basic Recommendations Q. What are basic recommendations on how to design a multi-GPU machine? Would be great to factor in price vs performance (so we can know how much we save vs pre-built)? A. See the links to the guides in the Resources sections below. Critical decisions to make Q. What are the smartest decisions to make it future proof (mine is already obsolete)? A. Computers are black holes that suck everything in and give little out (other than some RGB colors). There is no such thing as future proofing in modern computers, other than mechanical parts like your PC tower. Q. Can we do it at all or is it necessary to redesign it every 1-2 years? Ideally you just upgrade parts as they need upgrading, rather than replacing the whole PC. I use a 10-year old tower still. In-house vs. cloud Q. Is it worth building a good local machine or should you just learn how to leverage the cloud? A. Typically, for small set ups - up to several consumer GPUs, it’s almost always worth to have a local setup than cloud, unless you find some upstart cloud provider that for a while underprices their cost-per-hour. Pros: Of course, it depends on your usage patterns. If you are going to use it once in a blue moon, cloud it is. If you use it a lot then local will be cheaper. You can calculate your costs to purchase the machine vs. renting it. Not needing to worry about forgetting to turn the instance off and having the $$ counter running might be another plus. Heat is good. Heat is bad. In cold countries a home-based ML server is a great adjunct to keeping your working space warm. Not so much if you live in tropics. Cons: If you want a lot of large GPUs you might not be able to build it on consumer-level hardware, or the cost might be prohibitively expensive. Electricity cost is another factor. Some cities have very expensive electricity. Especially if you go over the “normal” usage quota that some electric companies have. Hardware gets outdated fast, so your needs may quickly become larger than what you have. You may or may not be able to recover some of the investment when trying to sell your old hardware. Key components Q .What are the main components to look for? Q. Sample setups would be great too (and why they are great). A. Make sure your CPU has enough PCIe lanes to support all the cards you plan to use Make sure your MB has enough PCIe slots and they are at the right distance to support modern GPUs that take up 2 slots. Research your PSU - so that it has enough extra power to handle those power-hungry GPUs Plan to have a lot of RAM, so ideally buy as large of a single RAM stick as possible. i.e. try not to fill out all RAM slots from the get going unless you buy some 256GB from the get going. NVMe slot or a few are going to be super-important. Try to have your OS on a different drive (e.g. SSD) - you don’t want to share your data NVMe with your OS operations. Does the box have enough space for cooling? Be it water cooling or lots of fans. Definitely don’t buy those pre-packaged PCs by large retailers, you can’t mod those. Buy your own components and plan for expansion. Puchase Timing Q. Is it a good time to buy GPU or when to know when there are good deals (seem a bit high right now)? A. Black Friday in North America gives you by far the best deals. 
But don’t just buy because it’s BF, do your research, since some companies raise their prices, instead of lowering those. Resources Lecture 6 from Full Stack Deep Learning 6 A 15000$ Machine Learning Rig: 2x3090 + 1xA6000 Build 5 Blogs focusing on ML Hardware: The Best 4-GPU Deep Learning Rig only costs $7000 not $11,000 14 Tim Dettmers’ great posts about choosing GPUs for deep learning 4 and Hardware Guide to Deep Learning 2. The guides do not focus on distributed setup, but there are suggestions on multi-GPU machines and how to select a GPU for your task and budget.
I would recommend to check out Tim Dettmers’ great posts about choosing GPUs for deep learning 15 and Hardware Guide to Deep Learning 6. The guides do not focus on distributed setup, but there are suggestions on multiGPU machines and how to select a GPU for your task and budget.
0
huggingface
Research
Masked Language Model Scoring
https://discuss.huggingface.co/t/masked-language-model-scoring/5541
Is there an implementation of the pseudo-log-likelihood for bidirectional language models (i.e. Salazar et al., Masked Language Model Scoring 14) in transformers? The GitHub repo in the linked paper uses transformers 3.3 and I've been unable to get it to work with 4.5.
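(For reference, the scoring itself is simple enough to sketch directly against a recent transformers version: mask each token in turn and sum the log-probabilities of the true tokens. The checkpoint and the per-token loop are illustrative, not the paper's optimized implementation.)

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence):
    """Sum of log-probs of each token when it is masked in turn (PLL, sketch)."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

print(pseudo_log_likelihood("The cat sat on the mat."))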
what kind of problems are you running into? presumably it’s due to a change in the API, so sharing what steps you’re taking and the error messages will help with the debugging
0
huggingface
Research
`nan` training loss but eval loss does improve over time
https://discuss.huggingface.co/t/nan-training-loss-but-eval-loss-does-improve-over-time/4521
I've been playing around with the XLSR-53 fine-tuning functionality but I keep getting nan training loss.
Audio files I'm using are:
Down-sampled to 16kHz
Set to one channel only
Vary in length between 4 to 10s
I've set the following hyper-params:
attention_dropout=0.1
hidden_dropout=0.1
feat_proj_dropout=0.0
mask_time_prob=0.05
layerdrop=0.1
learning rate: on a warmup schedule to 3e-4 for 3 epochs, at 5e-4 for 3 epochs, back to 3e-4
Sadly, I'm fine-tuning the model on an unpublished corpus, so I am probably not at liberty to upload it here, which might hinder reproducibility efforts greatly. Here's what the loss and WER progression looks like:
[Figure: training loss and WER progression]
Anyone know what could be happening here? The model seems to be training just fine and some testing proves that the model performs well on the language I'm training it on. So what's up with the training loss? Pinging @patrickvonplaten and @valhalla as this might be relevant to them.
Hey @jjdv, I’m sorry without a google colab it will be difficult to debug this for us. Given that your WER seems to decrease nicely - there might just be a problem at displaying the values…let’s see whether other people encounter the same problem
0
huggingface
Research
XLSR-53: To group tokens or not to group tokens
https://discuss.huggingface.co/t/xlsr-53-to-group-tokens-or-not-to-group-tokens/4522
In @patrickvonplaten 's Fine Tuning XLSR-53 notebook, he mention how tokens shall not be grouped when computing metrics, in the case of that notebook, the WER metric. And that does make sense. However, later on in the notebook, he goes on to use the processor to decode the predictions and doesn’t pass the group_tokens=False argument to the method. Shouldn’t the way we decode to compute metrics and to output predictions be the same? Which way would be the correct one? This is probably a minor issue for languages that don’t duplicate graphemes that often, but I’m curious as it could impact the perceived performance one way or another. Could someone clarify this for me?
Hey @jjdv, Could you check whether this issue answers your question: wav2vec2: `convert_tokens_to_string` contracts legitimately repeated characters · Issue #10619 · huggingface/transformers · GitHub 14?
0
huggingface
Research
Dealing with Imbalanced Datasets?
https://discuss.huggingface.co/t/dealing-with-imbalanced-datasets/4328
Hi everyone, I am dealing with a binary classification task (non-English language) of relatively long documents (~4k words on average). I have tested a Logistic Regression trained on simplistic BoW features, yielding reasonable performance. I am now testing the multilingual BERT, with two linear layers on top of it and using the Cross-Entropy loss; however, its performance is quite low. The “annoying” part is that on a given test set, BERT always predicts the majority class. It is worth saying that the dataset (both train and test) is rather imbalanced (80/20). I have tried the following without any luck: a) Play around with the learning rate, class weighting, num of linear layers & associated configurations. b) Select different parts of the document as input to BERT. c) Generate balanced samples (incl. oversampling the minority class). I have also tried generating a synthetic toy dataset of 1K examples from one document belonging to one class and another 1K examples from one document belonging belonging to the other class - the performance was perfect, as expected. Is there something obvious that I am missing in terms of debugging my model? Is the problem the imbalanced nature of the dataset I am working with? Could a Focal loss (or anything else) help on this end?
Hi @aguarius, my naive guess is that the length of your documents is the source of the low performance since BERT has a maximum context size of 512 tokens which is only a handful of paragraphs. One somewhat hacky approach to this could be to chunk your document into smaller passages, extract the hidden states per passage and then average them as features for your linear layers. What language(s) are in your corpus? That might be another source of difficulty since mBERT is not great on all of its languages and perhaps you can work with a better model like XLM-RoBERTa (or even a monolingual one if that’s possible)
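(A rough sketch of that chunk-and-average idea, plus class weighting for the 80/20 imbalance. The checkpoint, chunk size, and class weights are placeholders.)

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
encoder.eval()

def document_features(text, chunk_size=510):
    """Encode a long document chunk by chunk and average the first-token states (sketch)."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunk_states = []
    for i in range(0, len(ids), chunk_size):
        chunk = [tokenizer.cls_token_id] + ids[i:i + chunk_size] + [tokenizer.sep_token_id]
        input_ids = torch.tensor([chunk])
        with torch.no_grad():
            hidden = encoder(input_ids).last_hidden_state  # (1, seq_len, hidden)
        chunk_states.append(hidden[:, 0])  # first-token state of this chunk
    return torch.cat(chunk_states).mean(dim=0)  # document-level feature vector

# Class weighting for an 80/20 split (weights are illustrative, tune on dev data).
criterion = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 4.0]))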
0
huggingface
Research
How does BERT actually answer questions?
https://discuss.huggingface.co/t/how-does-bert-actually-answer-questions/4287
I have been trying to understand how the BERT model works. Specifically, I am trying to understand how it picks up answers to questions on a given passage. I have tried following this blog post 3 and, whilst it has given me a nice intuition, I would like to better understand what is happening under the hood. From my understanding, the question and paragraph are tokenised separately and then go through the transformer model. Then, the dot product between the 'transformed' tokens and a START/END vector is taken, with the higher result giving you the start and end of the answer. What I would like to understand is what happens to the tokens in this "transformation" (i.e. the feedforward through the model) that makes it possible to take a dot product and thereby indicate whether a word is a START/END.
Hi @theudster, you can find a detailed tutorial on question-answering with transformers here: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb 20
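(For a more hands-on intuition alongside the notebook: the QA head exposes per-token start and end logits, one dot product per token against a learned start vector and end vector, and the argmax of each picks the answer span. The checkpoint name below is just an example.)

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

question = "Where is Hugging Face based?"
context = "Hugging Face is a startup based in New York City and Paris."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax(dim=-1)  # token most likely to start the answer
end = outputs.end_logits.argmax(dim=-1)      # token most likely to end the answer
answer_ids = inputs["input_ids"][0, start: end + 1]
print(tokenizer.decode(answer_ids))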
0
huggingface
Research
Hugging Face Reads - 01/2021 - Sparsity and Pruning
https://discuss.huggingface.co/t/hugging-face-reads-01-2021-sparsity-and-pruning/3144
Hugging Face Reads January 2021 - Sparsity and Pruning By Victor Sanh 14, François Lagunas 5, and Yacine Jernite 4 Introduction to the Hugging Face Reads (HFR) series New year, new Hugging Face reading group ! We are launching the Hugging Face Reads (HFR) series: each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, questions we had for follow-up work after reading them, and how recent advances in the area have affected our work at HF. The first topic for January 2021 is Sparsity and Pruning, and we are planning to address Long-Range Attention in Transformers next month. Enjoy, and come join the conversation here! Introduction While large-scale pre-trained language models help solve an ever-growing set of natural language processing tasks, the progressive increase in their sizes raises concerns about their wide-scale applicability in practical settings, especially on devices with limited memory and computing power. Sparse neural network models which only use a fraction of the large parameter sets of their dense equivalents offer a promising avenue to reduce these computational demands. Recent works have proposed various methods to achieve impressive levels of sparsity, whether by gradually choosing which parameters to retain during training or by “pruning” the parameter set after the fact. This post presents an overview of four papers proposing or analyzing such methods. We review the following works: the (Chen et al., NeurIPS 2020) 32 paper investigating the applicability of the Lottery Ticket Hypothesis to BERT-style models, the (Frankle et al., 2020) 19 analysis of currently available methods to find sparsity patterns at initialization before doing any training, the (Li et al., 2020) 19 work on the computational and performance trade-offs between training a large model to prune later vs. training smaller models right away, and the (Hooker et al., 2020) 18 study of the biases introduced by current methods used to compress models (including pruning). Paper summaries For each paper, we identify some of the claims and contributions, as well as some follow-up questions. The Lottery Ticket Hypothesis for Pre-trained BERT Networks 32 By Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin The Lottery Ticket Hypothesis 6 (LTH) was initially developed and tested on computer vision systems. It states that given an initialization of a model, it is possible to find a subset of sufficient parameters during training: i.e., such that training only those parameters while setting the others to zero allows the model to reach the same performance as training the full model. Unfortunately, this subset can only be found after some amount of computation 3, and the method requires several iterations of re-training (either from scratch or from an earlier checkpoint, a method known as rewinding) and pruning for full effect. However, the approach can still end up improving training time and outputs a ready-to-use sparse model. This paper sets out to validate the LTH in NLP (and in particular in BERT-style models). Specifically, it asks whether sparse subnetworks of a model pre-trained with Masked Language Modeling (MLM) are sufficient to solve down-stream tasks. The answer is broadly positive. 
Findings Using a pre-trained initialization, BERT contains sparse subnetworks at non-trivial sparsities that can be fine-tuned in isolation to full performance on a range of downstream tasks. As opposed to previous work, these subnetworks are found at pre-trained initialization and not at random initialization (which was the case with the original LTH work). Rewinding does not significantly improve accuracy on downstream tasks. There are universal subnetworks that transfer to all studied downstream tasks. By further fine-tuning on the same task that was used for pre-training (Masked Language Modeling), the method finds a 70% sparse sub-network that can yield good results on all downstream applications. Follow-up questions In practice, the computational cost of fine-tuning is already much less than that of pre-training. How would “fine-pruning” (pruning while fine-tuning with methods such as movement pruning) a model on a downstream task compare to using the LTH-sparse model obtained with MLM (or with the downstream task)? The lack of impact of rewinding is in stark contrast with previous work on networks initialized from scratch and bears closer examination. For example, does this finding hold across fine-tuning learning rates? How much does the value of the selected parameters change over time? Pruning Neural Networks at Initialization: Why are We Missing the Mark? 19 By Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin This paper analyzes the performance of several methods to prune networks at initialization, so even before training starts, to save on training time as the network is smaller or sparse (SNIP, GraSP, SynFlow, Magnitude pruning). The methods are allowed to sample the dataset to perform the pruning: this sampling can be considered negligible compared to the computation required for training. They compare the methods to two “upper bounds” representing the performance we can hope to achieve when given access to information that is available after training: Magnitude Pruning and Lottery Ticket Rewinding. Findings All proposed methods are better than random pruning, but they are not sensitive to the individual selection weights, only to pruning proportions on each layer. Even worse, selecting the weights with the lowest instead of the highest value of the utility criteria improves performance on some methods (GraSP), which appears to invalidate some of the original works’ claims. The methods are far from competitive with post-training approaches. Moreover, none of the methods is SOTA in all settings: some methods are better at some sparsity levels than others, but this depends on sparsity. The methods yield better results if they are applied after a few training steps rather than right away at initialization, but they need a significant amount of training to approach the proposed “upper bounds”. Follow-up questions The problem of finding a “good” subnetwork right at initialization seems somewhat under-defined and possibly overly difficult: which task or set of tasks is used to measure success? Is it even possible to find an ideal sub-networks that works on any possible task a priori? Consequently, it is hard to tell whether the mixed results stem from flaws in the methods or from the task’s inherent difficulty. More insights here would be particularly enlightening. The authors note that the studied methods prune “layers, not weights”, which may explain the surprising results they obtain by inverting the weight selection. 
In that case, would a dense model with adaptive layer sizes following the same patterns work as well? An interesting follow-up direction could be something along the lines of “pruning as soon as possible”. Recent “Bertology” work 5 has shown that pre-trained models learn different levels of skill in sequence: we are particularly eager to see follow up work that explores the relationship between the emergence of these skills and the optimal pruning strategy. Train Large then Compress, Rethinking Model Size for Efficient Training and Inference of Transformers 19 By Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez This paper explores the landscape of the computational tradeoffs between transformer models’ sizes and the required numbers of hyper-parameter settings and training steps to achieve a good performance. It finds larger sizes can allow for fewer hyper-parameter settings and training steps and offers some practical advice about choosing a larger initial number of parameters that can later be pruned to, counter-intuitively, reduce the overall computational cost of training a mode when compared to just training a smaller model from scratch. Findings Large models are faster to train: they reach a given precision faster when measuring optimizing steps/wall clock time/ flops, even when they are stopped before convergence. Absolute size is more important than depth or width alone, but depth can be more important than width in some cases. The faster convergence usually makes up for the faster execution of smaller models. Large models can be compressed to smaller networks. Training large networks might speed up training but would lead to problems at inference time, as their resource cost is much higher. This work finds that pruning them to networks that end up containing fewer parameters than the original smaller alternatives still yields higher performance. They can be quantized too with less quantization error. Batch size has an influence on training speed. In practice, this means that gradient accumulation should be used for larger models. Follow-up questions The results are impressive, but it can still be difficult to get some intuition for why the larger models converge to a better state faster and are easier to prune. The authors mention previous work hinting that deeper networks “promote movement along directions already taken” as a possible explanation, but we are definitely looking forward to reading further analysis. The connection to Lottery Ticket Hypothesis is mentioned only in passing. Further work exploring whether the sub-networks selected by the two approaches are similar in any fashion (such as by considering the Jaccard distance between the sets). Characterizing Bias in Compressed Models 18 By Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily Denton This paper sheds light on the impact of pruning on neural models for vision and shows that reported top-line accuracies often hide the disproportionate negative impact on certain classes of inputs. The paper connects this phenomenon to bias and fairness considerations. Findings While the overall error is largely unchanged when a model is compressed (by pruning and quantization), there is a set of data that bears a disproportionately high portion of the error, with their accuracy falling by up to 50% while the overall performance only decreases by 1%, regardless of what the original accuracy was on the group. 
These examples (or at least some of them) can be consistently identified by comparing the predictions from a population of compressed models with the predictions from a population of non-compressed models on the same inputs: the examples where the predictions distributions diverge are called Compressed Identified Examples (CIE). CIE often correspond to low-frequency patterns in the inputs. Compression cannibalizes performance on low-frequency patterns in order to optimize the performance on higher-frequency patterns and preserve the overall accuracy. Compression thus amplifies biases of models (amplifying certain errors on certain types of inputs). The authors suggest using CIE as an auditing tool for compressed models: surfacing a tractable subset of the data for further inspection by domain experts to assess this issue. Follow-up questions This paper studies are pruning and quantization techniques that are run after training. One question that remains open is whether the models are facing an issue of modeling capacity (i.e., less-biased predictions require more representation power) or whether it is tied to the training procedure. Analyzing methods that reduce model size in the course of training or approaches such as gradual pruning 4 would shed some light on this question. Would up-weighting the CIE examples in training lead to models that are more robust to compression? Or would we expect to find different CIE groups? The authors suggest using CIE as a diagnostic tool. What can be done with the diagnostic? Are there other calls to action from these insights? For instance, how could we change existing benchmarks on compression to include robustness metrics (i.e., adding another component to the tradeoff size vs. accuracy on CIE groups)? Reading Group Discussion The quantitative results obtained on many of the common benchmark tasks by pruning are impressive. At the same time, they also remind us how much we still have to learn about the training dynamics of neural networks. Common wisdom states that “overparameterization helps with optimization”, but we have little theory available to help us understand the phenomenon further, especially in the deep attention-based models that perform so well in NLP. Each of the four papers above offers a different view of this question of modeling capacity vs. optimization vs. generalization. The Lottery Ticket Hypothesis relies on the quality of the initial state of the parameters at least as much as on the evolution of the weight values during optimization. As such, the main purpose of increasing the number of parameters would be to exponentially increase the chances of hitting a good sub-network at initialization. Other approaches focus more on how and whether the gradient flowing through the possibly redundant parameters help optimize the value of the ones we want to keep in the final pruned network: whether they try to evaluate that impact a priori as in the SynFlow algorithm or are content to simply keep them around for optimization based on their empirically proven efficiency and to prune them at the end of the training. All of the works outlined above, however, assume that the neural networks are indeed over-parameterized and that they can be pruned without changing their qualitative behavior. The CIE work questions that assumption and finds that pruning does change the behavior of the model in non-trivial ways. 
This assessment also agrees with some experiments Victor Sanh 14 has run on the task of natural language inference, gradually pruning a model trained on multiNLI and testing it on the HANS dataset. As the sparsity increases, the generalization as measured by the accuracy on the HANS test set decreases and gradually drops to 0, while the performance on the multiNLI test set stays mostly constant. Another experiment along those lines would be to see how much factual knowledge pre-trained language models lose as they are pruned (for example by monitoring closed-book QA accuracy for a model like T5).
The question remains whether this loss of generalization and increased bias is a result of the model losing "expressive capacity" as its number of parameters decreases, or whether the fault lies with compression strategies that aren't quite flexible enough; but the results certainly suggest that a large number of parameters serves as more than a crutch for optimization.
Another question that is somewhat orthogonal to the one above is that of when to optimally prune weights from the model. Pruning early saves computation, but does not benefit from any signal from the target task. Pruning after training can take advantage of additional information, but does not save any computation at training time or allow the parameters to adapt to the new sparsity pattern. Gradually pruning during training seems to provide the best of both worlds, but introduces a new set of hyper-parameters which may make optimization more costly. One should also keep in mind that actual computational gains will depend on the capabilities of current hardware and their ability to take full advantage of shifting sparsity patterns.
We're very much looking forward to the progress on all of these questions that 2021 is sure to bring!
@HuggingFace: Sparsity and Pruning
We first started investigating ways to make existing models more computationally efficient with DistilBERT 8, a method which was used to train one of our most popular models 2. The follow-up on sequence-to-sequence models yielded DistilBart 3, which reaches performance similar to its larger counterparts at a lower cost. Recently, we have also investigated approaches which focus on sparsity more specifically.
Movement Pruning
Most of the works referenced above use magnitude pruning, a widely used pruning strategy which thresholds weight values and simply sets the smallest ones to zero. In our work on Movement Pruning 14 led by Victor Sanh 14, we argue that this approach is less effective in the context of transfer learning and highlight the importance of considering the changes of weights during fine-tuning, as opposed to relying (mostly) on the pre-trained values. Code & hyper-parameters are available here. 11
Block Movement Pruning
The main drawback of unstructured pruning from a practical point of view is that current hardware can make it quite difficult to take full advantage of the sparsity pattern to accelerate the computation of the network. A compromise that can help alleviate this issue is the use of "semi-structured" sparsity patterns: selecting blocks (typically 32x32) of weights instead of single weights, while running the same kind of optimization methods. Block-sparse linear algebra is easier to accelerate, and the pytorch_block_sparse 14 library developed at Hugging Face is our attempt to show that. We are pretty confident more and more solutions for block-sparsity computation will emerge, and we will be working with major actors to enable it.
We are already providing some sample networks 7 that take advantage of block sparsity, so you can judge for yourself! Finally, we are also working to combine block sparsity with other accelerated sparsity patterns, such as the ones supported by NVIDIA Ampere, to further decrease the memory, compute, and energy used by the neural networks that will be everywhere in the near future.
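For readers who want to poke at the baseline discussed throughout this post, here is a minimal sketch of magnitude pruning using PyTorch's built-in torch.nn.utils.prune utilities. This is the simple thresholding baseline, not the movement-pruning or block-sparse code linked above, and the layer shape and 30% sparsity level are purely illustrative:

```python
import torch
from torch import nn
from torch.nn.utils import prune

# Toy linear layer standing in for one transformer weight matrix.
layer = nn.Linear(768, 3072)

# L1 (magnitude) pruning: zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# The mask is stored next to the original weights; this makes the pruning permanent.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")  # ~30%
```

Unstructured sparsity like this is exactly the kind that is hard to accelerate on current hardware, which is part of what motivates the block-sparse work described above.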
Hi @VictorSanh I noticed that your implementation of movement pruning involves some masked versions of BERT like MaskedBertForSequenceClassification. Do you know whether these classes will become part of the main library at some point in the future?
0
huggingface
Research
FDA Label Document Embedding
https://discuss.huggingface.co/t/fda-label-document-embedding/3654
Hi everyone, I am looking for any ideas or advice that you guys may have obtained in similar situations. I have been working on an NLP task to cluster medical documents for some time, and whilst I am eager to use transformers to get the best results, through all my efforts it seems that TF-IDF has worked best. I am working with the SIDER side effect dataset, which provides annotated FDA medication labels, an example is here: http://sideeffects.embl.de/media/pdf/fda/17106s032lbl/annotated.html#C0026961_0 2 I have tried TF-IDF and SciBert through sentence transformers, selecting the most relevant passages, but no amazing results yet. Does anyone have any ideas or previous experience? Many Thanks, Chris
Hi @FL33TW00D, I ran into a similar problem last year with TF-IDF and found the following approach gave better results: Encode the documents, either with your favourite Transformer or Universal Sentence Encoder (the latter works really well!) Run UMAP 3 on the embeddings to perform dimensionality reduction Cluster with HDBSCAN 4 HTH!
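For reference, a minimal sketch of that pipeline might look like the following; the model name, the UMAP/HDBSCAN hyper-parameters, and the docs list are placeholders to tune on your own data:

```python
# pip install sentence-transformers umap-learn hdbscan
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

docs = ["..."]  # your FDA label passages

# 1. Embed the documents with any sentence-transformers checkpoint.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs, show_progress_bar=True)

# 2. Reduce dimensionality with UMAP (cosine distance works well for embeddings).
reduced = umap.UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(embeddings)

# 3. Cluster with HDBSCAN; label -1 marks noise points.
labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(reduced)
```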
0
huggingface
Research
Why are embedding / pooler layers excluded from pruning comparisons?
https://discuss.huggingface.co/t/why-are-embedding-pooler-layers-excluded-from-pruning-comparisons/3580
Hi @VictorSanh, In your Saving PruneBERT notebook 5 I noticed that you only save the encoder and head when comparing the effects of pruning / quantisation. For example, here you save the original dense model as follows:

# Saving the original (encoder + classifier) in the standard torch.save format
dense_st = {name: param for name, param in model.state_dict().items()
            if "embedding" not in name and "pooler" not in name}
torch.save(dense_st, 'dbg/dense_squad.pt',)
dense_mb_size = os.path.getsize("dbg/dense_squad.pt")

My question is: why are the embedding and pooler layers excluded from the size comparison between the BERT-base model and its pruned / quantised counterpart? Naively, I would have thought that if I care about the amount of storage my model requires, then I would include all layers in the size calculation. Thanks!
Hey! The QA model actually only needs the qa-head; the pooler is just decorative (it's not even trained). Start and end of spans are predicted directly from the sequence of hidden states. This explains why I am not saving the pooler. As for the embeddings, I'm just fine-pruning the encoder, and the embedding modules stay fixed at their pre-trained values. So I am mostly interested in comparing the compression ratio of the encoder (since the rest is fixed). Hope that makes sense.
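If you do want to see how much the excluded modules weigh, a small sketch along the lines of the notebook snippet above can serialize both variants and compare them; here `model` is assumed to be the fine-tuned QA model:

```python
import os
import torch

def serialized_size_mb(state_dict, path="tmp_state.pt"):
    """Serialize a state dict and return its on-disk size in MB."""
    torch.save(state_dict, path)
    size_mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return size_mb

full_st = model.state_dict()
encoder_st = {name: param for name, param in full_st.items()
              if "embedding" not in name and "pooler" not in name}

print(f"full model:     {serialized_size_mb(full_st):.1f} MB")
print(f"encoder + head: {serialized_size_mb(encoder_st):.1f} MB")
```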
0
huggingface
Research
Debugging the RAG question encoder
https://discuss.huggingface.co/t/debugging-the-rag-question-encoder/3550
Hi- Thank you again for the awesome library & work. I have been trying to repurpose the RAG code to train on the KILT dataset. As I understand, during the training phase the document encoder (and the index) is fixed; only the query encoder and the generator are fine-tuned. As I train multiple epochs, something curious happens where the question encoder 'collapses' into emitting identical predictions regardless of the input. Specifically, out1 and out2 are identical, even though the input embeddings are different:

emb2 = torch.randn([1, 512, 768])
emb3 = torch.zeros([1, 512, 768])

# encoder
out1 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb2)
out2 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb3)

The way this behavior manifests itself is that the question encoder starts pulling the same wiki entries regardless of the question. In fact, the last hidden states are identical for each token in the sequence. I am curious if this type of behavior rings any bells? One hunch I have is whether mixed-precision training might be the cause. Any direction / feedback will be greatly appreciated, before I take the plunge and dig any further. Thank you! Deniz
Hi ! There’s some discussion about that at Retrieval Collapse when fine-tuning RAG · Issue #9405 · huggingface/transformers · GitHub 12 Apparently it can happen in some setups
0
huggingface
Research
Question about maximum number of tokens
https://discuss.huggingface.co/t/question-about-maximum-number-of-tokens/3544
Hi, It is my understanding that all the pretrained models have a fixed maximum number of tokens (512 for bert-base-uncased). Suppose I have texts that, when tokenized, exceed that number (like fictional text running through many paragraphs). I feel that there could be a better way than just using the first 512 tokens of the text. I could increase that limit, but my understanding is that to do that I would have to train the model from scratch and not be able to use the pretrained model. I would like to use the pretrained model. In order to achieve this I have an idea and need some feedback on it:
Split the text into a list of sentences using a Sentence Boundary Disambiguation tool.
Tokenize each sentence using the model's corresponding tokenizer.
Create the new text by keeping the first and last n sentences from the list and then taking a random subset of the rest of the sentences, such that all the tokens add up to 512. This will not restrict the input to only the first 512 tokens and will include random sentences from the middle of the text.
Any thoughts on this approach?
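For what it's worth, here is a rough sketch of step 3, assuming the sentences have already been split with an SBD tool. The function name and the budget of 510 tokens (leaving room for [CLS] and [SEP]) are just illustrative:

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def sample_sentences(sentences, max_tokens=510, n_keep=3, seed=0):
    """Keep the first and last n_keep sentences, then fill the remaining token
    budget with randomly chosen middle sentences, returned in original order."""
    if len(sentences) <= 2 * n_keep:
        return sentences
    rng = random.Random(seed)
    n_tokens = [len(tokenizer.tokenize(s)) for s in sentences]

    keep = set(range(n_keep)) | set(range(len(sentences) - n_keep, len(sentences)))
    budget = max_tokens - sum(n_tokens[i] for i in keep)

    middle = list(range(n_keep, len(sentences) - n_keep))
    rng.shuffle(middle)
    for i in middle:
        if n_tokens[i] <= budget:
            keep.add(i)
            budget -= n_tokens[i]

    return [sentences[i] for i in sorted(keep)]
```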
Sure, that is an option. You can also first run the text through a summarizer model and use the output as the input for your classifying model. There is no one “right” approach. You can try different things and see what works best for you.
0
huggingface
Research
Science Tuesday: MARGE
https://discuss.huggingface.co/t/science-tuesday-marge/685
For this science Tuesday, I read MARGE and wrote up a brief summary, as well as some interesting questions to discuss @joeddav @srush @VictorSanh @thomwolf @clem @julien-c @teven @patrickvonplaten @yjernite (only allowed 10 tags)
Pre-training via Paraphrasing (MARGE)
Paper 31: published June 26, 2020. Authors are from Facebook AI Research: Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer.
Summary
Huge models trained with a masked-LM pretraining objective, or similar, memorize lots of facts in their parameters and don't use external storage to look up facts they are missing. Human brains have separate systems (it seems 11) for memorizing facts and generating language, and often google things. In this spirit, the goal of many transformer+retriever models is to decompose memorization of facts and language understanding. MARGE stands for a Multi-lingual Autoencoder that Retrieves and GEnerates.
The pretraining setup:
reconstruct the original document by retrieving related documents (from wiki) and trying to regenerate the original
maximize the likelihood of the original doc conditional on the retrieved docs and relevance scores; this implicitly forces the retriever to learn how to generate good relevance scores
there are some tricks related to not scoring all of wikipedia for every example while keeping relevant articles in each batch
Every 10k training steps, they remake their batches by computing the cosine similarity of every pair of docs, and then greedily adding source and target docs to batches such that the pairwise sum of cosine similarities increases the most. This obviously seems hacky, but allows them to get away without approximate NN or some other expensive way to find related docs. This, and the fact that a randomly initialized encoder will give docs with lexical overlap a higher-than-random cosine similarity, allows the model to train from random. The retrieval model, ideally, can focus on getting the transformer all the facts that it needs while the transformer learns to paraphrase, which requires generating fluent language. For finetuning/inference, you don't need to use the retrieval part.
MARGE performs:
comparably to XLM-Roberta, with 20% of the pretraining compute
comparably to mBART on de-en and en-zh translation
SOTA on MLSUM, a cross-lingual summarization task
Key contributions: (1) Most of the related work is not multilingual, (2) most of the related work does not zero-shot well, (3) this pretraining objective unifies learning to retrieve and learning to generate; previous work requires two pretraining stages.
Related Work
REALM: "At a high level, the method goes like this: find the most similar text passages in BERT space, add those passages to the input as additional context, and then make a prediction." -Joe a few weeks ago 8
Different because the retriever has to be pretrained separately. REALM also seems to use mostly open-domain QA benchmarks.
RAG (Retrieval-Augmented Generation)
Different because mostly focused on knowledge-intensive benchmarks. MARGE can also do well on translation. Starts with bart-large + DPR, whereas MARGE pretrains end-to-end.
Questions somebody could answer:
Does MARGE outperform BART on English-only benchmarks like GLUE / xsum summarization? Why did they only show multilingual benchmarks?
When will there be code?
How long does a forward pass take?
What are the consequences of not using retrieval during inference? Does the model not "know" anything?
Higher Level: Is translation "knowledge intensive"?
How could we measure hallucinations? The authors suggest that we should use a pre-training objective that is as close as possible to the downstream task; the Pegasus paper also suggests this. Where else could this idea be applied?
Also, these two talks are good: https://slideslive.com/38929793/beyond-bert 38 (Mike Lewis at ACL) and https://www.youtube.com/watch?v=KTQPWoQ7Ol8 33 (Luke Zettlemoyer at AKCD)
From Mike Lewis, the 1st author:
We didn't try very hard, but from what I saw MARGE lags a little behind BART on monolingual English tasks. It's not too surprising, because I think having to be a good multilingual model just dilutes the capacity a bit. Similarly, XLM-R isn't quite at RoBERTa level.
Code is coming soon.
They also retrieve from CC-News, not just Wikipedia.
"We're going to look at retrieval during inference, but haven't run that yet. Qualitatively, I think it's a bit less prone to hallucination than BART because it (somewhat) knows that it doesn't know anything. That means we get surprisingly literal zero-shot translations, because it tends not to make too much stuff up."
0
huggingface
Research
Model or Dataset available for classifying a grammatical sentence?
https://discuss.huggingface.co/t/model-or-dataset-available-for-classifying-a-grammatical-sentence/3423
I want to be able to classify whether an input text is a complete sentence or not. The closest accurate definition of 'being complete' is whether the text is a grammatical sentence. Whether a sentence is 'complete' can also depend on its context, but I want to focus on sentence-like text as input for now.
Examples of complete sentences: "You can write using one of the following styles" / "You can write" / "He writes code"
Examples of incomplete sentences: "You can write using" / "You can write using one" / "He writes code for"
I found this package for grammar checking which I am going to try: language-tool-python on PyPI 15, which checks grammar using LanguageTool. I am wondering if there is an ML/DL solution for this problem. Is there a dataset or available model for this that you know of?
Hi @emadg, I don't think LanguageTool is the best way to go here, because it will just check grammar, and grammar alone does not tell you whether a sentence is complete or not. Here are the rules implemented in LanguageTool; you can check whether any of them would help you classify a sentence as complete: https://community.languagetool.org/rule/list?sort=category&order=asc 19
For an ML approach, I think you can try using a language model: look at the things which typically end a sentence (punctuation, conjunctions, etc.) and compute the probability of those tokens appearing at the end of your text. A low probability means there is very little chance the sentence ends there.
PS: I will add more if I find a concrete method to solve this.
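A rough sketch of the language-model idea above: score how much probability a causal LM puts on sentence-ending punctuation right after the text. This is only a heuristic; a classifier fine-tuned on an acceptability dataset such as CoLA would likely work better:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def end_probability(text):
    """Probability mass the LM assigns to sentence-ending tokens after `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    probs = next_token_logits.softmax(-1)
    enders = [tokenizer.encode(t)[0] for t in [".", "!", "?"]]
    return probs[enders].sum().item()

print(end_probability("He writes code"))      # expected to be relatively high
print(end_probability("He writes code for"))  # expected to be relatively low
```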
0
huggingface
Research
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
https://discuss.huggingface.co/t/switch-transformers-scaling-to-trillion-parameter-models-with-simple-and-efficient-sparsity/3137
Interesting new paper from Google improving upon T5. arXiv.org Switch Transformers: Scaling to Trillion Parameter Models with Simple and... 23 In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers...
Just to add to the previous post… Google Brain recently unveiled a language model of 1.6 trillion (1.6E+12) parameters with performance equal to or better than the SOTA on several NLP tasks. It surpasses the 175 billion (1.75E+11) parameters of GPT-3. The mastodon was made possible by the development of a new attention-based architecture (the Switch Transformer) that divides training data and parameters between a multitude of sub-models, or mixture of experts, connected by trainable gating. Despite its gigantic size, this text-to-text model was reportedly 7 times faster to train on C4 (the Colossal Clean Crawled Corpus, 750 GB) using the same amount of computation. The original article: https://bit.ly/2LQzsmJ 19, the source code: http://bit.ly/390j0ZY 46
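To make the routing idea concrete, here is a stripped-down sketch of a switch-style feed-forward layer with top-1 routing only. The real implementation adds a load-balancing auxiliary loss, capacity factors, selective precision, and expert parallelism, none of which are shown here; all sizes are illustrative:

```python
import torch
from torch import nn

class SwitchFeedForward(nn.Module):
    """Minimal top-1 routed mixture-of-experts feed-forward block."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (batch, seq, d_model)
        flat = x.reshape(-1, x.size(-1))         # route each token independently
        gate = self.router(flat).softmax(-1)     # (tokens, n_experts)
        top_p, top_idx = gate.max(-1)            # top-1 expert per token
        out = torch.zeros_like(flat)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # scale by the gate probability so the router receives gradient
                out[mask] = top_p[mask].unsqueeze(-1) * expert(flat[mask])
        return out.reshape_as(x)
```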
0
huggingface
Research
Classification problem difficulty when going from 3 classes to 5 classes?
https://discuss.huggingface.co/t/classification-problem-difficulty-when-going-from-3-classes-to-5-classes/3037
This question is conceptual in nature. Suppose I’m working on a text classification problem where I have 3 labels. To make the problem more concrete, let’s say I’m working on sentiment analysis with ground-truth labels positive, neutral, and negative. I am measuring accuracy and macro-F1. Now I’d like to make another data set with 5 ground-truth labels: very positive, positive, neutral, negative, and very negative. Intuitively, I would think that the 5-label classification problem is more difficult than the 3-label problem, but the only “proof” I can think of is that a random guess is correct only 1/5 of the time with 5 labels but a random guess is correct 1/3 of the time with 3 labels. Is there a more formal machine learning argument for why a 5-label problem is more difficult than 3-label? How about an N-label problem to an M-label problem where M > N? I’m willing to brush up on Vapnik–Chervonenkis theory if that’s needed (hopefully not).
Any help, intuition, hints, pointers, or references would be appreciated.
0
huggingface
Research
Text to Text Transformer - T5
https://discuss.huggingface.co/t/text-to-text-transformer-t5/3008
Hello, I am trying to understand how T5's SentencePiece tokenizer impacts a custom dataset. I know T5 does not use lossless training (mT5 does), but I am unsure of what impact it may have on any custom tokens in my dataset. Can someone please chime in if you have some insight? Thanks
What do you mean by “lossless” training?
0
huggingface
Research
Don’t Stop Pretraining BART
https://discuss.huggingface.co/t/dont-stop-pretraining-bart/2986
Hi, I would like to try the approach suggested in "Don't Stop Pretraining: Adapt Language Models to Domains and Tasks" (link 11) for BART. I have my own dataset, but there are 2 things that are still unclear to me.
I believe I should start with BartForConditionalGeneration, as that is the LM model. Is that right?
Can anyone provide more details on the noising algorithm that was used to train the model? The paper is pretty vague about it; these are the only details I found:
A number of text spans are sampled, with span lengths drawn from a Poisson distribution (λ = 3).
We mask 30% of tokens in each document, and permute all sentences.
Hi @Erpa, Yes, BartForConditionalGeneration is the LM model. Currently, seq2seq pre-training examples are not available in transformers. FairSeq has an implementation of the BART denoising dataset, so that might help. You can find it here 33
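For anyone who just wants something to experiment with, below is a rough, word-level approximation of the two corruptions described above. The fairseq denoising dataset works on subword tokens and handles details such as whole-word spans and merging adjacent masks, so treat this only as a sketch:

```python
import numpy as np

MASK = "<mask>"
rng = np.random.default_rng(0)

def permute_sentences(sentences):
    """Shuffle sentence order, as in BART's sentence-permutation noise."""
    return list(rng.permutation(sentences))

def text_infilling(words, mask_ratio=0.3, poisson_lambda=3.0):
    """Replace spans of words with a single <mask> token until roughly
    mask_ratio of the words have been masked."""
    words = list(words)
    n_to_mask = int(round(mask_ratio * len(words)))
    masked = 0
    while masked < n_to_mask and len(words) > 1:
        span = int(min(rng.poisson(poisson_lambda), n_to_mask - masked))
        start = int(rng.integers(0, max(1, len(words) - span)))
        words[start:start + span] = [MASK]   # a 0-length span still inserts a <mask>
        masked += max(span, 1)
    return words

sentences = ["the cat sat on the mat .", "it was warm .", "then it left ."]
noisy = text_infilling(" ".join(permute_sentences(sentences)).split())
print(" ".join(noisy))
```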
0
huggingface
Research
Pre-training with Lamb optimizer
https://discuss.huggingface.co/t/pre-training-with-lamb-optimizer/1647
Hello everyone, Has anyone experimented with the LAMB optimizer in HF? I tried using https://github.com/cybertronai/pytorch-lamb 39 but I was only marginally able to increase the batch size, and the training loss curve was rather flat. If you've used LAMB, would you please share some tips? How did you initialize it? I am not sure what to use in the optimizer_grouped_parameters list of dictionaries that wrap model parameters. Also, I've seen some other people use a different lr scheduler with LAMB. Thanks in advance.
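In case it helps, here is roughly how I'd expect the parameter groups to look with the Lamb class from that repo (double-check the exact constructor signature against the repo; the usual convention, as with AdamW, is to exclude biases and LayerNorm weights from weight decay):

```python
from pytorch_lamb import Lamb  # from cybertronai/pytorch-lamb

no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters()
                   if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters()
                   if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = Lamb(optimizer_grouped_parameters, lr=1e-3, betas=(0.9, 0.999), eps=1e-6)
```

Note that LAMB is mainly meant for very large batch sizes, so with only a modest batch-size increase little benefit is expected; it is also usually paired with a warmup plus polynomial-decay schedule rather than the default linear one.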
Hi vblagoje, I am new to transformers. I have been playing with the Hugging Face models for several months, and I am thinking of making some small changes to the BERT model and pre-training it from scratch. I saw you discussing the pretraining process in another post several days ago. I was wondering if you know the pretraining repository made by NVIDIA? GitHub NVIDIA/DeepLearningExamples 20 I think they implemented the LAMB optimizer and the NSP objective, and wrote code to better utilize multiple GPUs during distributed training. I still haven't used it yet because I have some trouble installing Docker on the remote machine I am working on. I was just wondering if you have already seen this repository or tried it, or if you have any advice on pretraining BERT from scratch?
0
huggingface
Research
About the encoder and generator used in the RAG model
https://discuss.huggingface.co/t/about-the-encoder-and-generator-used-in-the-rag-model/2959
Hi, I have questions about the RAG model. In the paper, the query encoder is DPR and the generator is BART. My questions are:
Is the generator the full BART model or just the decoder part of BART?
If I implement RAG with the encoder part of BART as the query encoder and the decoder part of BART as the generator, does that make sense w.r.t. the RAG concept? That seems more intuitive to me. Why do they use a 'heterogeneous' setting?
Thanks.
Hi, the generator is the full BART encoder-decoder. If you have a RAG model, you can access it via model.generator.
RAG's question encoder is not the same as RAG's generator's encoder… This really may be confusing, so let me try to explain:
The question encoder encodes the "question", which is used to retrieve "documents" (or so-called "contexts") from the retriever.
The retriever then concatenates the "contexts" with the "question"; this concatenated text is the new input.
This new input is encoded by BART's encoder, and the answer is generated via BART's decoder.
Hope this helps!
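A quick way to see the pieces for yourself — the dummy index keeps the download small, and the attribute paths follow the current transformers implementation, so check them against your installed version:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

print(type(model.rag.question_encoder))  # DPR-based question encoder
print(type(model.rag.generator))         # full BART encoder-decoder
```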
0
huggingface
Research
Using transformers (BERT, RoBERTa) without embedding layer
https://discuss.huggingface.co/t/using-transformers-bert-roberta-without-embedding-layer/2807
I’m looking to train a RoBERTa model on protein sequences, which is in many ways similar to normal nlp training, but in others quite different. In the language of proteins, I have 20 characters instead of the normal 26 characters used in english (it is 26 right? :D), so that is rather similar. The big difference is that you don’t really combine the characters in proteins to actual words, but rather just keep each character as a distinct token or class. Hence essentially my input to the transformer model could just be a list of numbers ranging from 0-19. However that would mean that my input would only have 1 feature if I did that, and I’m not sure a transformer could work with that? I’m thinking of just doing a onehot encoding of these characters, which would give me 20 input features. However this is of course still very low in comparison to how normal transformers are trained, where d_model is somewhere in the range of 128-512 if I understand correctly. Does anyone have any experience with anything like this? any good advice for how it is most likely to work?
Hey, I’d recommend taking a look at this repo: https://github.com/agemagician/CodeTrans 87 by @agemagician . This repo uses transformer models for protein sequences if I understand it correctly. Also, taking a look at those models: huggingface.co Rostlab (Rostlab) 25 might help. Not sure if there is a notebook on doing protein sequence LM, maybe @agemagician has a good pointer by chance
0
huggingface
Research
What are some recommended pretrained models for extracting semantic feature on single sentence?
https://discuss.huggingface.co/t/what-are-some-recommended-pretrained-models-for-extracting-semantic-feature-on-single-sentence/2698
Hi, I am more of a CV guy and recently got interested in doing an NLP project. In this project, one part might involve extracting sentence-level semantic representations from a pretrained model. In computer vision, a standard way to extract features of an image or a video snippet is to use a ResNet pretrained on ImageNet or an I3D pretrained on the Kinetics dataset, respectively. I want to do a similar thing in the NLP domain. I wonder if there are some recommended models, pretrained on specific datasets, for me to try? As far as my limited understanding goes, models trained on datasets which aim to tell whether two sentences are semantically equivalent could be a direction (e.g. QQP, STS-B). But those need a pair of sentences, and my case is just feeding one sentence (or one block of sentences), not in a pair format. Any suggestions? Thanks!
Hi! IMO, Bert could be comparable to ResNet as the baseline. (you can use last_hidden_state variable of BertModel just like the global-pooled features of ResNet) Then, newer models like Roberta and many more could be comparable to EfficientNet etc.
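A small sketch of that baseline — mean-pooling BERT's last hidden states over non-padding tokens, which is roughly the NLP analogue of global average pooling on ResNet feature maps. The model choice and pooling strategy are both things to experiment with; the sentence-transformers library packages better-tuned variants of this idea:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def sentence_embedding(texts):
    """Mean-pool the last hidden states over non-padding tokens."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state            # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)            # (batch, 768)

emb = sentence_embedding(["A quick example sentence.", "Another one."])
print(emb.shape)
```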
0
huggingface
Research
BORT: Optimal Subarchitecture Extraction for BERT
https://discuss.huggingface.co/t/bort-optimal-subarchitecture-extraction-for-bert/2562
Hi guys, Wondering if anyone has read the new paper from the Alexa team regarding BERT size reduction. arXiv.org Optimal Subarchitecture Extraction For BERT 8 We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is... GitHub alexa/bort 8 Repository for the paper "Optimal Subarchitecture Extraction for BERT" - alexa/bort If anyone has any thoughts on it or would like to discuss please comment here. Thanks
Super interesting, thanks for sharing!! Perhaps @VictorSanh can give us the best comments Wondering if the same technique can be efficiently used for the giant models like T5-11B and GPT-3
0
huggingface
Research
EMNLP Picks from the Hugging Face Science Team
https://discuss.huggingface.co/t/emnlp-picks-from-the-hugging-face-science-team/2424
The Hugging Face team had a great time attending EMNLP the other week. Virtual conferences are tricky, but I personally have come to enjoy some aspects of it, like the pre-recorded presentations and gather.town mingling. And not having to travel is a plus, too
Last week a few of us on the science team tried to each select 4-5 presentations we'd recommend others on the team to check out. I've compiled our suggestions and included them here for those of you that are interested in our picks & very brief comments. Included are suggestions from myself, @VictorSanh, @yjernite, and @canwenxu (including a couple repeats). There was an incredible amount of high-caliber work and we couldn't share all but a few that we thought our team might be interested in, so feel free to respond with any suggestions (or comments) of your own!
Victor's picks (@VictorSanh)
BLEU might be Guilty but References are not Innocent
Paper: https://arxiv.org/abs/2004.06063 22
Presentation: https://slideslive.com/38938647 17
Discusses a new reference generation method for calculating more reliable automatic scores (including BLEU) that correlate better with human judgement. Also releases a dataset of references (included in sacrebleu, I believe).
Learning from Task Descriptions
Paper: https://www.aclweb.org/anthology/2020.emnlp-main.105.pdf 37
Presentation: https://slideslive.com/38939344 15
Introduces a new dataset for structured task-oriented evaluation on unseen tasks (0-shot settings), conditioned on a description of the task in natural language. (Nice discussion, less convinced by the dataset itself.)
Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)
Paper: https://www.aclweb.org/anthology/2020.emnlp-main.16/ 21
Presentation: https://slideslive.com/38939219 9
Models can learn to represent linguistic features with little pretraining data, but require orders of magnitude more data to learn to prefer linguistic generalizations over surface ones (and it is slow…).
Reformulating Unsupervised Style Transfer as Paraphrase Generation
Paper: https://www.aclweb.org/anthology/2020.emnlp-main.55/ 33
Presentation: https://slideslive.com/38938942 15
Proposes a simple method based on fine-tuning pretrained language models on automatically generated paraphrase data, discusses weaknesses in automatic metrics of style transfer, and releases a 15M-example dataset of style transfer.
The 5th one: I found the talk of Emmanuel Dupoux at CoNLL very informative.
Yacine's picks (@yjernite)
ETC: Encoding Long and Structured Inputs in Transformers
Paper: https://www.aclweb.org/anthology/2020.emnlp-main.19 16
Presentation: https://slideslive.com/38938951/etc-encoding-long-and-structured-inputs-in-transformers 4
Has local attention and one global attention token per sentence, which is trained with a contrastive loss similar to ICT.
A* Beam Search
Presentation: https://slideslive.com/38939414/bestfirst-beam-search 21
The A* algorithm is not quite as easy to batch as regular beam search, but leads to better and more diverse n-best lists.
F2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax
Paper: https://www.aclweb.org/anthology/2020.emnlp-main.737/ 13
Presentation: https://slideslive.com/38938686 6
Pretty simple idea: groups tokens into bins of equal probability mass for a hierarchical softmax, so the model can focus on choosing between candidates with the same prior. Leads to a nice improvement on human evaluation and generation diversity metrics.
Towards Reasonably-Sized Character-Level Transformer NMT by Finetuning Subword Systems Comments: https://www.aclweb.org/anthology/2020.emnlp-main.203 9 Presentation: https://slideslive.com/38938871 5 Pre-trains on BPE and fine-tunes on full character decomposition to get the model to train faster. Towards Debiasing NLU Models from Unknown Biases Paper: https://www.aclweb.org/anthology/2020.emnlp-main.613 21 Presentation: https://slideslive.com/38938901 4 Related to @VictorSanh’s recent paper: the “biases” tend to show up in easy-to-learn examples, so the model down-weight examples that are classified correctly early in training. Canwen’s picks (@canwenxu) Experience Grounds Language Paper: https://www.aclweb.org/anthology/2020.emnlp-main.703.pdf 51 Presentation: https://slideslive.com/38938907 25 This may be the paper that defines the future direction of NLP. What should a model learn and what ability should a model have? You can find a good guess from this paper. Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting Paper: https://www.aclweb.org/anthology/2020.emnlp-main.634.pdf 21 Presentation: https://slideslive.com/38938976 2 Yes we know that fine-tuning a pretrained language model can bring the problem of forgetting. Mixout 10 is a valid solution but this EMNLP paper proposes an easy-to-use optimizer to resolve the problem. Do sequence-to-sequence VAEs learn global features of sentences? Paper: https://www.aclweb.org/anthology/2020.emnlp-main.350.pdf 12 Presentation: https://slideslive.com/38939119 5 It’s a little surprising to see this title cuz we all thought of course VAEs do. However, through well-designed experiments, the authors reveal the other side of this claim. Pre-Training Transformers as Energy-Based Cloze Models Paper: https://www.aclweb.org/anthology/2020.emnlp-main.20.pdf 14 Presentation: https://slideslive.com/38939095 4 It’s a really cool idea and it makes sense mathematically. Though the results are modest, there’re definitely more to explore. BERT-of-Theseus: Compressing BERT by Progressive Module Replacing Paper: https://www.aclweb.org/anthology/2020.emnlp-main.633.pdf 16 Presentation: https://slideslive.com/38938938 5 Self-promoting. It’s a really neat idea that you can compress a model by simply replacing their components. No additional loss function needed. My picks Learning from Task Descriptions Paper : https://www.aclweb.org/anthology/2020.emnlp-main.105.pdf 37 Presentation : https://slideslive.com/38939344 15 @VictorSanh mentioned this one but I want to include it as well. They create a new dataset trying to generalize from one set of tasks to another using only task descriptions w/o training data. It’s an ambitious idea to try to formalize and evaluate but I appreciated the work. I’m actually taking a break from adding their dataset “zest” to Datasets to compile this post, so it should be up very soon. Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start Paper: https://www.aclweb.org/anthology/2020.emnlp-main.660 9 Presentation: https://slideslive.com/38939094 6 Another approach to “universal” NLP w/ cross-task generalization. The idea here is to pose various tasks as one task (natural language inference) enabling transferability between tasks. Incidentally, the first author is the same who introduced the NLI-based zero-shot 4 classification approach which is roughly the same as the one we now use in our zero-shot pipeline & API 3. 
Text Classification Using Label Names Only: A Language Model Self-Training Approach Paper: https://www.aclweb.org/anthology/2020.emnlp-main.724 29 Presentation: https://slideslive.com/38938946 20 Similar to the “zero-shot” setup of Schick et al. 3's PET and Yin et al. 3's entailment-based approach (though they refer to it as “weak supervision” here). A nice difference from previous work is that they create groups of synonyms to a class label which can be used as a class representation instead of the class name alone. Another demonstration of self-training with unlabeled data only working well for classification. Experience Grounds Language Paper: https://www.aclweb.org/anthology/2020.emnlp-main.703.pdf 51 Presentation: https://slideslive.com/38938907 25 Really nice kinda philosophical paper about computational understanding of language. They lay out different “world scopes” to help think about different levels of understanding/experience. Reminiscent in some ways of Bender & Koller’s ACL paper this year, “Climbing towards NLU” 8 and their superintelligent octopus.
Especially like the linguistic shout-outs in there, like Warstadt et al. It's always nice to see authors go back and see what (generativist) linguistic theory has been saying for perhaps over sixty years, and find ways to link that with how LMs "learn" grammar. I'll be having some time off soon, can't wait to catch up with all these latest developments! Thanks for the distillation (pardon the pun)!
0
huggingface
Research
Adding learnable coefficients for multi-objective losses?
https://discuss.huggingface.co/t/adding-learnable-coefficients-for-multi-objective-losses/2191
I am running a multi-objective problem where I compute three losses and then sum them up. For each loss, I want to have a learnable coefficient (alpha, beta, and gamma, respectively) that will be optimized.

optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)

for batch in dl:
    optimizer.zero_grad()
    result = model(batch)
    loss1 = loss_fn_1(result)
    loss2 = loss_fn_2(result)
    loss3 = loss_fn_3(result)
    # How to optimize alpha, beta, and gamma?
    loss = alpha*loss1 + beta*loss2 + gamma*loss3
    loss.backward()
    optimizer.step()

Specific questions:
Should I even have coefficients alpha, beta, and gamma? The optimizer will minimize, so they'll all go to 0.0, right?
If having those coefficients is a good idea, how can I prevent them from going to 0.0? Someone told me to use regularization, but what does that mean in this case?
How do I declare alpha, beta, and gamma to be learnable by AdamW?
1. Yes.
2. Theoretically, we have to impose a constraint like alpha + beta + gamma = 1. To turn this into an unconstrained optimization, we have to apply a Lagrange multiplier to the constraint equation, and that will be the regularization formula your friend talked about — e.g. you put lambda1*alpha, lambda2*beta and lambda3*gamma into the loss function. I believe this complicates the problem even more, since finding the optimal values of the lambdas is difficult even theoretically.
2.5. Sorry, this does not answer your Q3, but I think the practical way is to treat alpha, beta and gamma as hyperparameters and simply optimize them via grid search. In this case, split off some of your training set as a validation set and define a metric on it. The validation metric has to be chosen to suit your problem (e.g. error, F1, Spearman or any other) — you can get some ideas on metrics by finding Kaggle competitions similar to your problem and looking at their metrics. Select the hyperparameters that optimize your validation metric.
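If you still want to try learnable weights, one common trick (a sketch, reusing the names from the snippet above) is to parameterize them through a softmax so they stay positive and sum to 1. Be aware that the optimizer can still shift most of the weight onto whichever loss is easiest to shrink, which is why the grid-search route above is often the safer choice:

```python
import torch
from torch import nn
from transformers import AdamW

# Three unconstrained logits; the softmax keeps the effective weights
# positive and summing to 1, so they cannot all collapse to zero.
loss_logits = nn.Parameter(torch.zeros(3))

optimizer = AdamW(list(model.parameters()) + [loss_logits], lr=2e-5, eps=1e-8)

for batch in dl:
    optimizer.zero_grad()
    result = model(batch)
    losses = torch.stack([loss_fn_1(result), loss_fn_2(result), loss_fn_3(result)])
    weights = loss_logits.softmax(dim=0)   # alpha, beta, gamma
    loss = (weights * losses).sum()
    loss.backward()
    optimizer.step()
```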
0
huggingface
Research
Is there an easy way to apply layer-wise decaying learning rate in huggingface trainer for RobertaMaskedForLM?
https://discuss.huggingface.co/t/is-there-an-easy-way-to-apply-layer-wise-decaying-learning-rate-in-huggingface-trainer-for-robertamaskedforlm/1599
I am pre-training RobertaMaskedForLM on my own custom dataset. I wanted to implement the layer-wise learning rate decay given in https://github.com/aws-health-ai/multi_domain_lm#learning-rate-control 12 corresponding to the paper - An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training 12. Is there an easy way to incorporate this decay of learning rate with layer depth towards input using transformers.Trainer?
I have the same question
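One way to get this without modifying the Trainer is to build the optimizer yourself, with one parameter group per layer, and pass it in through the optimizers argument. A rough sketch, assuming a RoBERTa masked-LM model called model with the usual .roberta.encoder.layer structure; the 0.95 decay factor, step counts, and dataset/collator names are placeholders, and the linked repo's exact grouping may differ:

```python
from torch.optim import AdamW
from transformers import Trainer, TrainingArguments, get_linear_schedule_with_warmup

def layerwise_params(model, base_lr=5e-5, decay=0.95, weight_decay=0.01):
    """One parameter group per module, with the learning rate shrinking
    geometrically with depth (top layer gets base_lr, embeddings the smallest)."""
    n = model.config.num_hidden_layers
    groups = [{"params": list(model.lm_head.parameters()),
               "lr": base_lr, "weight_decay": weight_decay}]
    for i in range(n):
        groups.append({"params": list(model.roberta.encoder.layer[i].parameters()),
                       "lr": base_lr * decay ** (n - 1 - i), "weight_decay": weight_decay})
    groups.append({"params": list(model.roberta.embeddings.parameters()),
                   "lr": base_lr * decay ** n, "weight_decay": weight_decay})
    return groups

optimizer = AdamW(layerwise_params(model))
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=1_000,
                                            num_training_steps=100_000)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", max_steps=100_000),
    train_dataset=train_dataset,
    data_collator=data_collator,
    optimizers=(optimizer, scheduler),  # Trainer uses these instead of creating its own
)
```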
0
huggingface
Research
Pre-Train BERT (from scratch)
https://discuss.huggingface.co/t/pre-train-bert-from-scratch/1245
BERT has been trained on the MLM and NSP objectives. I want to train BERT with/without the NSP objective (with NSP, in case the suggested approach is different). I haven't performed pre-training in the full sense before. Can you please share how to obtain the data (crawl and tokenization details) on which BERT was trained? Since it takes a lot of time, I am looking for well-tested code that can yield BERT with/without NSP in one go. Any suggestions will be helpful. I know about some projects like these 59, but I guess they won't integrate well with transformers, which is a must-have condition in my case.
BERT was trained on BookCorpus and English Wikipedia, both of which are available in the datasets library (huggingface.co 123, huggingface.co 63).
Transformers has recently included a dataset for next sentence prediction which you could use (TextDatasetForNextSentencePrediction): huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L258 120
There is also an NSP head for BERT (BertOnlyNSPHead / BertPreTrainingHeads): huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L560 29
EDIT: The BertForPreTraining class can be used for both MLM and NSP. With the current examples/language-modeling scripts I guess it's only possible to use either MLM or NSP; you might need to write your own script to combine these.
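For the MLM-only ("without NSP") case, the standard pieces fit together roughly like this — a sketch with placeholder file paths and hyper-parameters; as noted above, combining MLM with NSP in one Trainer run needs a custom dataset/collator:

```python
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # or train your own tokenizer
model = BertForMaskedLM(BertConfig())  # random init, i.e. pre-training from scratch

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-mlm-from-scratch",
                           per_device_train_batch_size=16, max_steps=100_000),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```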
0
huggingface
Research
What are some popular datasets for domain adaptation in NLP
https://discuss.huggingface.co/t/what-are-some-popular-datasets-for-domain-adaptation-in-nlp/1931
Hello, Having some experience in domain adaptation in CV but no NLP. Can someone recommend some popular datasets in NLP for DA? and even better for me if there is any in the hugginface datasets. Thanks!
cc @yjernite maybe here (and Angie which should be also on the forum by the way!)
0
huggingface
Research
Carrying Gradients Through Generate
https://discuss.huggingface.co/t/carrying-gradients-through-generate/301
Hi folks, How would you best recommend that I pass gradients through generate? Below is a rough code snippet explaining the objective. I am thinking that I could take the hypo_ids directly from the model output (instead of from generate), but this is no longer natural because teacher forcing is used to generate these. Thoughts?
Context from the PyTorch Lightning implementation:

# self.model = BartForConditionalGeneration("facebook/bart-base")

def forward(self, batch, batch_id):
    return self.model(
        input_ids=batch["x"],
        decoder_inputs=batch["decoder_inputs"],
        decoder_labels=batch["decoder_labels"],
    )

def training_step(self, batch, batch_id):
    """Want two losses: language modelling loss and semantic similarity loss"""
    # language modelling loss
    outputs = self(batch)[0]
    language_modelling_loss = outputs[0]

    # semantic similarity loss
    target_ids = batch["target_ids"]
    hypo_ids = self.model.generate(batch["x"])  # no gradients passed, of course
    semsim_loss = 1 - nn.CosineSimilarity(dim=0)(target_ids, hypo_ids)

    return {"loss": language_modelling_loss + semsim_loss}
EDIT: The only method seems to be to use RL to simulate the sampling that occurs. see https://papers.nips.cc/paper/8682-training-language-gans-from-scratch.pdf 6
0
huggingface
Research
Adding features to a pretrained language model
https://discuss.huggingface.co/t/adding-features-to-a-pretrained-language-model/770
I’ve often thought about use cases where you think of word or sentence features that you know must be helpful to the system. Features that you would typically use in an SVM or a shallow network. I would want to know if those features still have the ability to add to the performance of a pretrained language model. So rather than just fine-tuning the language model, what are good ways to integrate custom features into LM without pretraining from-scratch? My guess is that you can just take the output from an LM and add a custom head on top that also takes in these other features. So basically the output of the LM serves as another set of features. This does not seem ideal though, since the final connections might be too shallow, I imagine that a better approach is possible that still involves finetuning the LM along side training the network that the custom features are part of. Any thoughts or best “tried and true” methods out there?
Hi Bram, One of my students studied exactly this phenomenon in a recent submission to SemEval: “UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information.” (https://arxiv.org/abs/2008.08547 69) Excerpts from the paper: We hypothesise that deep learning models, especially those that use pre-trained embeddings and so are trained on a small number of epochs, can benefit from corpus level count information. We test this on Sub-Task A using an ensemble of BERT and TF-IDF which outperforms both the individual models. For sub-task B, we hypothesise that these sentence representations can benefit from having POS information to help identify the presence of a target. To test this hypothesis, we integrate the count of part-of-speech (POS) tags with BERT. While this combination did outperform BERT, we found that a simpler modification to BERT (i.e. cost weighting, Section 3.5) outperforms this combination. And in terms of how the model was built: This ensemble model is created by concatenating the sentence representation of BERT to the features generated by the TF-IDF model before then using this combined vector for classification. In practice, this translates into calculating the TF-IDF vector for each sentence and concatenating it to the corresponding BERT output. This vector is then fed to a fully connected classification layer. Both BERT and the TF-IDF weights are updated during training.
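The architecture described in the excerpt boils down to something like the following sketch. The TF-IDF (or POS-count) vectors are computed outside the model and passed in as an extra tensor; the class name, dropout value, and head shape are illustrative:

```python
import torch
from torch import nn
from transformers import AutoModel

class BertWithExtraFeatures(nn.Module):
    """Concatenate the [CLS] representation with hand-crafted features
    (e.g. a TF-IDF vector) before the classification layer; BERT is
    fine-tuned jointly with the head."""
    def __init__(self, n_extra, n_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Sequential(
            nn.Dropout(0.1),
            nn.Linear(hidden + n_extra, n_labels),
        )

    def forward(self, input_ids, attention_mask, extra_features):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                   # [CLS] token
        return self.classifier(torch.cat([cls, extra_features], dim=-1))
```

During training, both the BERT weights and the head receive gradients, matching the setup described in the paper.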
0
huggingface
Research
Bart-base rouge scores
https://discuss.huggingface.co/t/bart-base-rouge-scores/683
Has anyone finetuned bart-base on xsum or cnn summarization task and willing to report the rouge score they got? I just got 15.5 for xum which feels low, since bart-large can get to 22 ish. @colanim @valhalla @VictorSanh ?
@sshleifer, could it be due to the adjust_logits issue? Just a guess, but as I posted there, after modifying adjust_logits_during_generation the BLEU-4 score for my model went from 13.09 to 19.14 for bart-base.
0
huggingface
Research
Load/save HF block sparse model
https://discuss.huggingface.co/t/load-save-hf-block-sparse-model/1646
Hey everyone, I am exploring the https://github.com/huggingface/pytorch_block_sparse 2 project. One of the issues that popped up almost immediately is loading a saved "sparsified" model. So, let's say you sparsified RoBERTa using the example provided. Now that the model has been sparsified (its linear layers replaced with BlockSparseLinear nn modules), how can I load the previously saved model using the HF ecosystem? All I can think of is that I again need to create a RoBERTa model with uninitialized weights, sparsify it, and then load the weights with model.load_state_dict(torch.load(PATH))? Am I overlooking something obvious?
No mechanism in place for loading as of now, which is OK. I sparsified the model again and loaded the weights manually via model.load_state_dict(torch.load(PATH)).
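In other words, the loading recipe is roughly the following, where sparsify stands for whatever patching routine (e.g. the pytorch_block_sparse model patcher) was applied before training — it must recreate exactly the same module structure, and the function here is hypothetical:

```python
import torch
from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("roberta-base")
model = sparsify(model)  # hypothetical: re-apply the same BlockSparseLinear patching as at training time
model.load_state_dict(torch.load("sparse_roberta.pt"))
model.eval()
```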
0