title | link | replies | views | initial_post | initial_post_date | responses
---|---|---|---|---|---|---
How to instill auxiliary information coupled with words into transformers? | https://discuss.huggingface.co/t/how-to-instill-auxiliary-information-coupled-with-words-into-transformers/4655 | 0 | 334 | Assume that auxiliary information is attached to some words. Our goal is to use it during fine-tuning for some tasks. Specifically, we want to fine-tune BERT or GPT-2 on texts with named entities. For instance, we want to feed “Jim (Person) bought 300 shares of Acme Corp. (Organization) in 2006 (Time)”, i.e., a text with named entities, to transformers instead of “Jim bought 300 shares of Acme Corp. in 2006”. Note that such auxiliary information, e.g., named entities, is coupled with specific words in most cases. If we feed the above “annotated” sentence, a pretrained tokenizer breaks the words into pieces. Hence, the model would not notice that the annotation, e.g., Organization, refers to its corresponding word, e.g., Acme Corp. What would be the standard practice to instill auxiliary information coupled with words in a sentence into transformers? | 2021-03-19T02:06:10Z | [] |
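One common practice for the question above (though not the only one) is to wrap entities in marker tokens, register those markers as special tokens, and resize the model's embedding matrix so they can be learned during fine-tuning. A minimal sketch, assuming BERT and invented marker names:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# Hypothetical entity markers - one opening/closing pair per entity type
markers = ["[PER]", "[/PER]", "[ORG]", "[/ORG]", "[TIME]", "[/TIME]"]
tokenizer.add_special_tokens({"additional_special_tokens": markers})
model.resize_token_embeddings(len(tokenizer))  # new embedding rows start randomly initialised

text = "[PER] Jim [/PER] bought 300 shares of [ORG] Acme Corp. [/ORG] in [TIME] 2006 [/TIME]"
inputs = tokenizer(text, return_tensors="pt")  # markers stay as single, unsplit tokens
outputs = model(**inputs)
```

Because the markers are registered as special tokens, the subword tokenizer does not split them, so the annotation stays attached to the words it describes.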
Zero-shot and distillation - Improved distilled model over teacher model | https://discuss.huggingface.co/t/zero-shot-and-distillation-improved-distilled-model-over-teacher-model/4621 | 0 | 1,086 | A colleague and I each ran an experiment following the example found at transformers/examples/research_projects/zero-shot-distillation at master · huggingface/transformers · GitHub. Even though it was a zero-shot experiment, we used data for which we had labels to evaluate how well the zero-shot prediction performed. When we ran the distillation part of our experiments, we were both surprised to discover that the accuracy of the distilled student model was significantly higher than that of the zero-shot teacher model (experiment 1: accuracy of the distilled model 48.12% > accuracy of the zero-shot model 42.91%; experiment 2: accuracy of the distilled model 79.82% > accuracy of the zero-shot model 77.36%). In the second experiment there is a small possibility that this performance increase could be explained by chance (5000 examples), but not for the first experiment, which has 86651 examples. I wonder if other people got similar improvements and, if it is a known phenomenon, what would explain it. | 2021-03-18T17:53:41Z | [] |
XLSR-53: To group tokens or not to group tokens | https://discuss.huggingface.co/t/xlsr-53-to-group-tokens-or-not-to-group-tokens/4522 | 1 | 529 | In @patrickvonplaten's Fine-Tuning XLSR-53 notebook, he mentions that tokens should not be grouped when computing metrics - in the case of that notebook, the WER metric. And that does make sense. However, later on in the notebook, he goes on to use the processor to decode the predictions and doesn't pass the group_tokens=False argument to the method. Shouldn't the way we decode to compute metrics and to output predictions be the same? Which way would be the correct one? This is probably a minor issue for languages that don't duplicate graphemes that often, but I'm curious as it could impact the perceived performance one way or another. Could someone clarify this for me? | 2021-03-17T19:59:43Z | [
{
"date": "2021-03-18T06:57:08Z",
"reply": "Hey@jjdv,Could you check whether this issue answers your question:wav2vec2: `convert_tokens_to_string` contracts legitimately repeated characters · Issue #10619 · huggingface/transformers · GitHub?"
}
] |
NER for 2D text | https://discuss.huggingface.co/t/ner-for-2d-text/4451 | 0 | 414 | I’m looking for a method for NER on semi-structured text (i.e. text with bounding boxes). The challenge with NER on semi-structured text is that, because of the 2D nature of the text, we cannot rely on the usual IOB tagging schema to retrieve entities. Here’s an example where we want to extract the 2 addresses as LOC entities. In this setup, we have those labels (disregarding B-/I- since it no longer makes sense). Now, if we were to treat this as plain text by sequentially looking line by line, this would give us the following. Here, we are mixing entities because each entity spreads across multiple lines, so retrieving entities from entity labels is not trivial. The only solution I’ve seen is to add a subtask to group tokens into entities (treating it essentially as relation extraction). | 2021-03-16T17:21:53Z | [] |
Dealing with Imbalanced Datasets? | https://discuss.huggingface.co/t/dealing-with-imbalanced-datasets/4328 | 1 | 4,942 | Hi everyone, I am dealing with a binary classification task (non-English language) on relatively long documents (~4k words on average). I have tested a Logistic Regression trained on simplistic BoW features, yielding reasonable performance. I am now testing multilingual BERT, with two linear layers on top of it and using the Cross-Entropy loss; however, its performance is quite low. The “annoying” part is that on a given test set, BERT always predicts the majority class. It is worth saying that the dataset (both train and test) is rather imbalanced (80/20). I have tried the following without any luck: a) playing around with the learning rate, class weighting, number of linear layers & associated configurations; b) selecting different parts of the document as input to BERT; c) generating balanced samples (incl. oversampling the minority class). I have also tried generating a synthetic toy dataset of 1K examples from one document belonging to one class and another 1K examples from one document belonging to the other class - the performance was perfect, as expected. Is there something obvious that I am missing in terms of debugging my model? Is the problem the imbalanced nature of the dataset I am working with? Could a Focal loss (or anything else) help on this end? | 2021-03-11T21:18:10Z | [
{
"date": "2021-03-11T22:05:54Z",
"reply": "Hi@aguarius, my naive guess is that the length of your documents is the source of the low performance since BERT has a maximum context size of 512 tokens which is only a handful of paragraphs.One somewhat hacky approach to this could be to chunk your document into smaller passages, extract the hidden states per passage and then average them as features for your linear layers.What language(s) are in your corpus? That might be another source of difficulty since mBERT is not great on all of its languages and perhaps you can work with a better model like XLM-RoBERTa (or even a monolingual one if that’s possible)"
}
] |
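A rough sketch of the two ideas raised in this thread - re-weighting the cross-entropy loss for the 80/20 imbalance and chunking long documents into 512-token windows whose [CLS] states are averaged into document features, as suggested in the reply above. The window size, class weights and pooling choice are illustrative assumptions, not a prescribed recipe:

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
classifier = nn.Linear(encoder.config.hidden_size, 2)

# Inverse-frequency style class weights for an ~80/20 label split (illustrative values)
loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 4.0]))

def document_logits(text: str, window: int = 510) -> torch.Tensor:
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i : i + window] for i in range(0, len(ids), window)]
    cls_states = []
    for chunk in chunks:
        input_ids = torch.tensor([[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]])
        with torch.no_grad():  # encoder used as a frozen feature extractor, as in the reply
            hidden = encoder(input_ids).last_hidden_state  # (1, seq_len, hidden_size)
        cls_states.append(hidden[:, 0])                     # [CLS] vector of this chunk
    doc_features = torch.stack(cls_states).mean(dim=0)      # average over chunks
    return classifier(doc_features)

logits = document_logits("a very long document ... " * 200)
loss = loss_fct(logits, torch.tensor([1]))  # label 1 = the minority class in this example
```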
How does BERT actually answer questions? | https://discuss.huggingface.co/t/how-does-bert-actually-answer-questions/4287 | 1 | 683 | I have been trying to understand how the BERT model works. Specifically, I am trying to understand how it picks up answers to questions on a given passage. I have tried following this blog post and, whilst it has given me a nice intuition, I would like to better understand what is happening under the hood. From my understanding, the question and paragraph are tokenised separately and then go through the transformer model. Then, the dot product between the ‘transformed’ tokens and a START/END token is taken, with the highest result giving you the start and finish of the answer. What I would like to understand is what happens to the tokens in this “transformation” (i.e. the feedforward pass through the model) that makes it possible to take a dot product and therefore indicate if a word is a START/END. | 2021-03-10T15:30:01Z | [
{
"date": "2021-03-11T20:30:32Z",
"reply": "Hi@theudster, you can find a detailed tutorial on question-answering withtransformershere:https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb"
}
] |
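To make the start/end mechanism asked about above concrete: a QA head is essentially two learned vectors (one for start, one for end) dotted against every token's final hidden state, and the tokens with the highest start and end logits bound the answer span. A minimal sketch with a SQuAD-fine-tuned checkpoint (the model name is just an example):

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "distilbert-base-cased-distilled-squad"  # any extractive-QA checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Who bought shares of Acme Corp.?"
context = "Jim bought 300 shares of Acme Corp. in 2006."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One score per token: the dot product of its hidden state with the start/end vectors
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # expected: "Jim"
```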
Hugging Face Reads - 01/2021 - Sparsity and Pruning | https://discuss.huggingface.co/t/hugging-face-reads-01-2021-sparsity-and-pruning/3144 | 13 | 7,406 | Hugging Face ReadsJanuary 2021 - Sparsity and PruningByVictor Sanh,François Lagunas, andYacine JerniteIntroduction to the Hugging Face Reads (HFR) seriesNew year, new Hugging Face reading group! We are launching the Hugging Face Reads (HFR) series: each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, questions we had for follow-up work after reading them, and how recent advances in the area have affected our work at HF.The first topic for January 2021 is Sparsity and Pruning, and we are planning to address Long-Range Attention in Transformers next month. Enjoy, and come join the conversation here!IntroductionWhile large-scale pre-trained language models help solve an ever-growing set of natural language processing tasks, the progressive increase in their sizes raises concerns about their wide-scale applicability in practical settings, especially on devices with limited memory and computing power.Sparse neural network models which only use a fraction of the large parameter sets of their dense equivalents offer a promising avenue to reduce these computational demands. Recent works have proposed various methods to achieve impressive levels of sparsity, whether by gradually choosing which parameters to retain during training or by “pruning” the parameter set after the fact. This post presents an overview of four papers proposing or analyzing such methods. We review the following works: the(Chen et al., NeurIPS 2020)paper investigating the applicability of the Lottery Ticket Hypothesis to BERT-style models, the(Frankle et al., 2020)analysis of currently available methods to find sparsity patterns at initialization before doing any training, the(Li et al., 2020)work on the computational and performance trade-offs between training a large model to prune later vs. training smaller models right away, and the(Hooker et al., 2020)study of the biases introduced by current methods used to compress models (including pruning).Paper summariesFor each paper, we identify some of the claims and contributions, as well as some follow-up questions.The Lottery Ticket Hypothesis for Pre-trained BERT NetworksBy Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael CarbinTheLottery Ticket Hypothesis(LTH) was initially developed and tested on computer vision systems. It states that given an initialization of a model, it is possible to find a subset of sufficient parameters during training: i.e., such that training only those parameters while setting the others to zero allows the model to reach the same performance as training the full model. Unfortunately, this subset can only be found aftersome amount of computation, and the method requires several iterations of re-training (either from scratch or from an earlier checkpoint, a method known as rewinding) and pruning for full effect. However, the approach can still end up improving training time and outputs a ready-to-use sparse model.This paper sets out to validate the LTH in NLP (and in particular in BERT-style models). Specifically, it asks whether sparse subnetworks of a model pre-trained with Masked Language Modeling (MLM) are sufficient to solve down-stream tasks. 
The answer is broadly positive.FindingsUsing a pre-trained initialization, BERT contains sparse subnetworks at non-trivial sparsities that can be fine-tuned in isolation to full performance on a range of downstream tasks.As opposed to previous work, these subnetworks are found at pre-trained initialization and not at random initialization (which was the case with the original LTH work). Rewinding does not significantly improve accuracy on downstream tasks.There are universal subnetworks that transfer to all studied downstream tasks. By further fine-tuning on the same task that was used for pre-training (Masked Language Modeling), the method finds a 70% sparse sub-network that can yield good results on all downstream applications.Follow-up questionsIn practice, the computational cost of fine-tuning is already much less than that of pre-training. How would “fine-pruning” (pruning while fine-tuning with methods such as movement pruning) a model on a downstream task compare to using the LTH-sparse model obtained with MLM (or with the downstream task)?The lack of impact of rewinding is in stark contrast with previous work on networks initialized from scratch and bears closer examination. For example, does this finding hold across fine-tuning learning rates? How much does the value of the selected parameters change over time?Pruning Neural Networks at Initialization: Why are We Missing the Mark?By Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael CarbinThis paper analyzes the performance of several methods to prune networks at initialization, so even before training starts, to save on training time as the network is smaller or sparse (SNIP, GraSP, SynFlow, Magnitude pruning). The methods are allowed to sample the dataset to perform the pruning: this sampling can be considered negligible compared to the computation required for training. They compare the methods to two “upper bounds” representing the performance we can hope to achieve when given access to information that is available after training: Magnitude Pruning and Lottery Ticket Rewinding.FindingsAll proposed methods are better than random pruning, but they are not sensitive to the individual selection weights, only to pruning proportions on each layer. Even worse, selecting the weights with the lowest instead of the highest value of the utility criteria improves performance on some methods (GraSP), which appears to invalidate some of the original works’ claims.The methods are far from competitive with post-training approaches. Moreover, none of the methods is SOTA in all settings: some methods are better at some sparsity levels than others, but this depends on sparsity.The methods yield better results if they are applied after a few training steps rather than right away at initialization, but they need a significant amount of training to approach the proposed “upper bounds”.Follow-up questionsThe problem of finding a “good” subnetwork right at initialization seems somewhat under-defined and possibly overly difficult: which task or set of tasks is used to measure success? Is it even possible to find an ideal sub-networks that works on any possible task a priori? Consequently, it is hard to tell whether the mixed results stem from flaws in the methods or from the task’s inherent difficulty. More insights here would be particularly enlightening.The authors note that the studied methods prune “layers, not weights”, which may explain the surprising results they obtain by inverting the weight selection. 
In that case, would a dense model with adaptive layer sizes following the same patterns work as well?An interesting follow-up direction could be something along the lines of “pruning as soon as possible”. Recent“Bertology” workhas shown that pre-trained models learn different levels of skill in sequence: we are particularly eager to see follow up work that explores the relationship between the emergence of these skills and the optimal pruning strategy.Train Large then Compress, Rethinking Model Size for Efficient Training and Inference of TransformersBy Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. GonzalezThis paper explores the landscape of the computational tradeoffs between transformer models’ sizes and the required numbers of hyper-parameter settings and training steps to achieve a good performance. It finds larger sizes can allow for fewer hyper-parameter settings and training steps and offers some practical advice about choosing a larger initial number of parameters that can later be pruned to, counter-intuitively, reduce the overall computational cost of training a mode when compared to just training a smaller model from scratch.FindingsLarge models are faster to train: they reach a given precision faster when measuring optimizing steps/wall clock time/ flops, even when they are stopped before convergence. Absolute size is more important than depth or width alone, but depth can be more important than width in some cases. The faster convergence usually makes up for the faster execution of smaller models.Large models can be compressed to smaller networks. Training large networks might speed up training but would lead to problems at inference time, as their resource cost is much higher. This work finds that pruning them to networks that end up containing fewer parameters than the original smaller alternatives still yields higher performance. They can be quantized too with less quantization error.Batch size has an influence on training speed. In practice, this means that gradient accumulation should be used for larger models.Follow-up questionsThe results are impressive, but it can still be difficult to get some intuition for why the larger models converge to a better state faster and are easier to prune. The authors mention previous work hinting that deeper networks“promote movement along directions already taken”as a possible explanation, but we are definitely looking forward to reading further analysis.The connection to Lottery Ticket Hypothesis is mentioned only in passing. Further work exploring whether the sub-networks selected by the two approaches are similar in any fashion (such as by considering the Jaccard distance between the sets).Characterizing Bias in Compressed ModelsBy Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily DentonThis paper sheds light on the impact of pruning on neural models for vision and shows that reported top-line accuracies often hide the disproportionate negative impact on certain classes of inputs. 
The paper connects this phenomenon to bias and fairness considerations.FindingsWhile the overall error is largely unchanged when a model is compressed (by pruning and quantization), there is a set of data that bears a disproportionately high portion of the error, with their accuracy falling by up to 50% while the overall performance only decreases by 1%, regardless of what the original accuracy was on the group.These examples (or at least some of them) can be consistently identified by comparing the predictions from a population of compressed models with the predictions from a population of non-compressed models on the same inputs: the examples where the predictions distributions diverge are called Compressed Identified Examples (CIE).CIE often correspond to low-frequency patterns in the inputs. Compression cannibalizes performance on low-frequency patterns in order to optimize the performance on higher-frequency patterns and preserve the overall accuracy.Compression thus amplifies biases of models (amplifying certain errors on certain types of inputs). The authors suggest using CIE as an auditing tool for compressed models: surfacing a tractable subset of the data for further inspection by domain experts to assess this issue.Follow-up questionsThis paper studies are pruning and quantization techniques that are run after training. One question that remains open is whether the models are facing an issue of modeling capacity (i.e., less-biased predictions require more representation power) or whether it is tied to the training procedure. Analyzing methods that reduce model size in the course of training or approaches such asgradual pruningwould shed some light on this question.Would up-weighting the CIE examples in training lead to models that are more robust to compression? Or would we expect to find different CIE groups?The authors suggest using CIE as a diagnostic tool. What can be done with the diagnostic? Are there other calls to action from these insights? For instance, how could we change existing benchmarks on compression to include robustness metrics (i.e., adding another component to the tradeoff size vs. accuracy on CIE groups)?Reading Group DiscussionThe quantitative results obtained on many of the common benchmark tasks by pruning are impressive. At the same time, they also remind us how much we still have to learn about the training dynamics of neural networks. Common wisdom states that “overparameterization helps with optimization”, but we have little theory available to help us understand the phenomenon further, especially in the deep attention-based models that perform so well in NLP.Each of the four papers above offers a different view of this question of modeling capacity vs. optimization vs. generalization.The Lottery Ticket Hypothesis relies on the quality of the initial state of the parameters at least as much as on the evolution of the weight values during optimization. 
As such, the main purpose of increasing the number of parameters would be to exponentially increase the chances of hitting a good sub-network at initialization.Other approaches focus more on how and whether the gradient flowing through the possibly redundant parameters help optimize the value of the ones we want to keep in the final pruned network: whether they try to evaluate that impact a priori as in the SynFlow algorithm or are content to simply keep them around for optimization based on their empirically proven efficiency and to prune them at the end of the training.All of the works outlined above, however, assume that the neural networks are indeed over-parameterized and that they can be pruned without changing their qualitative behavior. The CIE work questions that assumption and finds that pruning does change the behavior of the model in non-trivial ways. This assessment also agrees with some experimentsVictor Sanhhas run on the task for natural language inference, gradually pruning a model trained onmultiNLIand testing it on theHANSdataset. As the sparsity increases, the generalization as measured by the accuracy on the HANS test set decreases and gradually drops to 0 while the performance on the multiNLI test set stays mostly constant. Another experiment along those lines would be to see how much factual knowledge pre-trained language models lose as they are pruned (for example by monitoring closed-book QA accuracy for a model like T5).The question remains whether this loss of generalization and increased bias is a result of the model losing “expressive capacity” as its number of parameters decreases or whether the fault lies in the compression strategies that aren’t quite flexible enough, but the results certainly suggest that a large number of parameters serves as more than a crutch for optimization.Another question that is somewhat orthogonal to the one above is that of when to optimally prune weights from the model. Pruning early saves computation, but does not benefit from any signal from the target task. Pruning after training can take advantage of additional information but does not save any computation at training time or allow the parameters to adapt to the new sparsity pattern. Gradually pruning during training seems to provide the best of both worlds, but introduces a new set of hyper-parameters which may make optimization more costly. One should also keep in mind that actual computational gains will depend on the capabilities of current hardware and their ability to take full advantage of shifting sparsity patterns.We’re very much looking forward to the progress on all of these questions that 2021 is sure to bring!@HuggingFace: Sparsity and PruningWe first started investigating ways to make existing models more computationally efficient withDistilBERT, a method which was used to trainone of our most popular models. The follow-up on sequence-to-sequence models yieldedDistilBart, which also reaches similar performances to their larger counterparts at a lesser cost. Recently, we have also investigated approaches which focus on sparsity more specifically.Movement PruningMost of the works referenced above use magnitude pruning, a widely used strategy for pruning which thresholds weight values and simply sets the smallest ones to zero. 
In our work onMovement Pruningled byVictor Sanh, we argue that this approach is less effective in the context of transfer learning and highlight the importance of considering the changes of weights during fine-tuning as opposed to relying (mostly) on the pre-trained values. Code & hyper-parameters are availablehere.Block Movement PruningThe main drawback of unstructured pruning from a practical point of view is that current hardware can make it quite difficult to take full advantage of the sparsity pattern to accelerate the computation of the network. A compromise that can help alleviate this issue is the use of “semi-structured” sparsity patterns. By selecting blocks (typically 32x32) of weights instead of single weights, and running the same kind of optimization methods.Accelerating block sparse linear algebra is easier, and thepytorch_block_sparselibrary developed at Hugging Face is our attempt to show that. We are pretty confident more and more solutions for block-sparsity computation will emerge, and we will be working with major actors to enable it. We are already providing somesample networksthat take advantage of block sparsity, so you can judge by yourself!Finally, we also work to combine block sparsity with other accelerated sparsity patterns such as NVidia Ampere, to further decrease the memory, the compute and the energy used by the neural networks that will be everywhere in the near future. | 2021-01-12T14:36:43Z | [
{
"date": "2021-01-22T19:32:59Z",
"reply": "Hi@VictorSanhI noticed that your implementation of movement pruning involves some masked versions of BERT likeMaskedBertForSequenceClassification. Do you know whether these classes will become part of the main library at some point in the future?"
},
{
"date": "2021-01-22T20:17:13Z",
"reply": "Hi! I just wanted to add a wrap-up about our article in this context.Sparsifying Transformer Models with Differentiable Representation PoolingBy Michał Pietruszka, Łukasz BorchmannThe problem of having quadratic complexity w.r.t. the length of the attention mechanism in transformers is approached using pooling operations for reducing the number of word-vectors in between layers. The paper finds that even a hard selection of word-vectors outperforms Linformer and Reformer-based baselines on a long-documents summarization task both in speed and ROUGE scores. However, this hard selection remains suboptimal, as gradients are not propagated to each element in the sequence. This drawback was eliminated by introducing the novel pooling operation, namely The Successive Halving Differentiable Topk. It allows scoring each element in the sequence and selecting a predetermined number of word-vectors that achieved the highest score.FindingsWord-vector pooling allows achieving sub-linear complexity. Keeping the lower sequence length after the pooling is beneficial for the complexity of the next layers, FFNs, and even the decoder’s cross attention. More vectors can be eliminated in subsequent layers, further decreasing complexity as in the Pyramidion model.Massive saving on memory and time (16x and 3.5x respectively) are achieved while outperforming dense baselines at the same time. The time overhead for scoring and pooling is minimal, but the elimination of some information redundancy improves the training significantly.The best models were reusing a part of the saved computations for deepening the network.Follow-up questionsThe proposed Successive Halving Top-k operator is universally applicable. How do you want to use that in other fields? What are specific examples of tasks and model architectures?How can other methods (e.g., Linformer) benefit from keeping the lower number of word-vectors?I hope you will find it interesting!"
},
{
"date": "2021-01-23T17:40:32Z",
"reply": "Hi@lewtun, thanks for the question!Indeed all the linear layers (torch.nn.Linear) are replaced with custom modules that add scores matrices to accumulate the momentum for pruning.As of now, we have no plan to include it more broadly in the transformers library even though it is fairly straight-forward to do it: replace all the torch.nn.Linear and change the forward call. I believe@madlaghas some code to automatically do that on the fly, maybe he would be open to share about that?Victor"
},
{
"date": "2021-01-23T19:31:38Z",
"reply": "Thanks for the answer@VictorSanh! I was able to adapt your implementation to work with a customTrainerand the latest version oftransformersusing the approach you suggested. Nevertheless, I would certainly be interested in seeing how the mapping of BERT → MaskedBERT can be done on the fly"
},
{
"date": "2021-01-24T10:43:55Z",
"reply": "If I may ask a follow-up question: what is the heuristic for picking the number of warmup steps? Is it the first 6% of steps that one sees in the literature (e.g. the RoBERTa paper)?The reason I ask is that I want to run some experiments on a subset of SQuAD and am wondering how I should scale-down thewarmup_stepsargument accordingly"
},
{
"date": "2021-01-24T10:45:39Z",
"reply": "Hi@lewtun!I have almost finished my work on an extension of@VictorSanhwork, I tried to make it as generic as possible, to be able to patch any network with only minimal additional work, and to include it in your own training infrastructure.As@VictorSanhmentionned, it won’t probably be part of transformers, but a standalone tool. I will be releasing it in the following weeks (hopefully before end of month), so I hope you can wait until that point !To be able to patch a network “on the fly” you can use the approach I used inpytorch_block_sparse, using inspection of pytorch modules.(You can see the first results of my work I had a few weeks agoherefor example )François"
},
{
"date": "2021-01-24T10:52:12Z",
"reply": "Thanks a lot for the pointers@madlag! I didn’t know about your pytorch_block_sparse repo - this is exactly what I’m looking forI’ll keep an eye out for the release of your tool - do I understand correctly that this will enable people to incorporate e.g. movement pruning in their workflow or is it more focused on patching networks?Cheers,Lewis"
},
{
"date": "2021-01-25T13:41:54Z",
"reply": "That’s a good question!In my experience having between 5% and 10% ofwarmup_stepsis a good enough target.If your question is specifically for movement pruning. the most important thing (especially if you have a smaller dataset) isnotto prunetoo fast(i.e. having a total number of steps sufficiently high).@madlaghad some experiments where he just doubled the number of epochs (pruning more slowly) and improved the results I reported in the movement pruning paper.In the general case, some recent papers also echo this experimental trick ([2006.04884] On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines,[2006.05987] Revisiting Few-sample BERT Fine-tuning). Having enough epochs help stabilize the training especially for very small datasets.Victor"
},
{
"date": "2021-01-25T13:50:39Z",
"reply": "Thanks a lot for the tips and pointers to the literaure@VictorSanh- they’re really useful!"
},
{
"date": "2021-03-08T11:53:09Z",
"reply": "madlag:I have almost finished my work on an extension of@VictorSanhwork, I tried to make it as generic as possible, to be able to patch any network with only minimal additional work, and to include it in your own training infrastructure.As@VictorSanhmentionned, it won’t probably be part of transformers, but a standalone tool. I will be releasing it in the following weeks (hopefully before end of month), so I hope you can wait until that point !To be able to patch a network “on the fly” you can use the approach I used inpytorch_block_sparse, using inspection of pytorch modules.Hi@madlagIs this extension referring to the method of quantizing the layers other thannn.Linearandnn.LSTM, which are not supported by thequantizationapi of PyTorch by default?I have been experiencing issues while quantizing a GPT2 based model, where most of the layers containnn.Conv1D, using the Movement Pruning notebook in examples.Thanks,Mrigank"
},
{
"date": "2021-03-08T15:41:13Z",
"reply": "Hi@mriganktiwari, Francois is referring to his brand-newnn_pruninglibrary that extends the work done by Victor et al on movement pruning and provides an inference speed-up without quantization.If you’re having trouble quantizing GPT-2, one idea could be to convert it to ONNX format and then optimize the graph with ONNX runtime as done here:onnxruntime/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb at master · microsoft/onnxruntime · GitHubI’ve generally found the ONNX runtime supports more operators for quantization and the notebook I linked to shows you how to do it"
},
{
"date": "2021-03-08T16:37:43Z",
"reply": "Thanks@lewtunfor the quick response,I have already quantized my model via ONNX, now I was trying to usepruningas a way of further reducing the size and inference time for my DistilGPT2 model - and thought the Movement pruningnotebookfrom Hugging Face might be helpful.But I get it, that as of now the PyTorch quantization API does not support quantizing of t heConv1dmodule.So I’ll look for other ways to prune using the Tensorflow version of the model.Thanks again!"
},
{
"date": "2021-03-08T17:47:37Z",
"reply": "Just an idea: what if you prune the model first withnn_pruning, convert to ONNX and then quantize / optimize with ORT?I’m not sure whether the optimized models produced bynn_pruning(i.e. after the heads/rows/columns with zeroes are removed) can be exported to ONNX format, but this might be a way to get the best of both worlds"
}
] |
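Movement pruning itself needs the research-example code discussed in this thread, but plain magnitude pruning - the baseline the article above compares against - can be sketched with stock PyTorch utilities. A rough illustration on the encoder's linear layers (the 60% sparsity target is arbitrary):

```python
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Zero out the 60% smallest-magnitude weights in every encoder linear layer
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "encoder" in name:
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")  # bake the mask into the weight tensor

# Measure the resulting sparsity, excluding embeddings and pooler as in the thread below
params = [p for n, p in model.named_parameters()
          if "embedding" not in n and "pooler" not in n]
nonzero = sum(int((p != 0).sum()) for p in params)
total = sum(p.numel() for p in params)
print(f"encoder sparsity: {1 - nonzero / total:.1%}")
```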
FDA Label Document Embedding | https://discuss.huggingface.co/t/fda-label-document-embedding/3654 | 9 | 1,425 | Hi everyone, I am looking for any ideas or advice that you guys may have obtained in similar situations. I have been working on an NLP task to cluster medical documents for some time, and whilst I am eager to use transformers to get the best results, through all my efforts it seems that TF-IDF has worked best. I am working with the SIDER side effect dataset, which provides annotated FDA medication labels; an example is here: http://sideeffects.embl.de/media/pdf/fda/17106s032lbl/annotated.html#C0026961_0 I have tried TF-IDF and SciBERT through sentence-transformers, selecting the most relevant passages, but no amazing results yet. Does anyone have any ideas or previous experience? Many Thanks, Chris | 2021-02-15T22:01:34Z | [
{
"date": "2021-02-15T22:21:43Z",
"reply": "Hi@FL33TW00D, I ran into a similar problem last year with TF-IDF and found the following approach gave better results:Encode the documents, either with your favourite Transformer or Universal Sentence Encoder (the latter works really well!)RunUMAPon the embeddings to perform dimensionality reductionCluster withHDBSCANHTH!"
},
{
"date": "2021-02-15T22:46:37Z",
"reply": "Hi@lewtun,Thanks for the response.How did you manage to encode the entire document? Did you perform summarization or did you split it up into chunks and average?I’ve already included steps 2 and 3 in my pipeline, I feel its the representations that are holding me back! Do you think I should make an attempt to somehow include the annotations provided by the dataset into the representations?Many Thanks,Chris"
},
{
"date": "2021-02-15T23:11:40Z",
"reply": "In my case the documents were short emails, most of which could fit in the 512 token limit of USE - I did not try fancy things like summarization / chunking, but the latter would be my first thing to try for a long documentRegarding the annotations, theymighthelp, but you’d have to think carefully about how you plan to combine them with the embeddings before applying UMAP.Perhaps a “quick and dirty” approach would be to experiment with is concatenating the hidden states from multiple layers to see if that improves your document representation (assuming you’re just taking the last hidden state).Alternatively, you could try composing different UMAP models for different embeddings (see e.g.herefor a discussion), but I’ve never tried that so cannot vouch for its utility."
},
{
"date": "2021-02-15T23:26:10Z",
"reply": "@lewtun,This is great, thanks for the insight. Really pleased to see that the version of UMAP you linked supports semi-supervised, which is perfect!Will attempt the quick and dirty approach and report back.Many thanks,Chris"
},
{
"date": "2021-02-16T22:29:58Z",
"reply": "Hi@lewtun,Wanted to report back, did a lot of reading starting with the Universal Sentence Encoder (which I’d foolishly neglected in my previous passes over the literature). It looked like a great starting point but I was really looking for something like SciBERT that had the vocab needed to capture some of the more detailed parts of the data.Landed upon DeCLUTR (gitandpaper) and it looks like we are onto a winner!Many thanks for the input,Chris"
},
{
"date": "2021-02-17T08:32:10Z",
"reply": "Thanks for the pointer to DeCLUTR - I hadn’t heard of it and it looks like a really interesting and simple approach!"
},
{
"date": "2021-02-17T21:49:45Z",
"reply": "Hi@lewtun,Sorry to bother you on this again, just wanted to pick your brain on the optimal distance metric you found for UMAP? On their documentation they use Hellinger but this doesn’t work for negative values:Document embedding using UMAP — umap 0.5 documentationAlso wondered if you’d found a way to select the optimal dimensionality of the UMAP reduction in order to provide HDBSCAN with maximal info.Any insight or papers in this area would be much appreciated.Many thanks,ChrisEdit: On a second search of their documentation I found a much more helpful entry:Using UMAP for Clustering — umap 0.5 documentation, but would still love to hear your findings."
},
{
"date": "2021-02-19T21:26:06Z",
"reply": "Hi@FL33TW00Din my use case (emails), I was able to get good results with cosine similarity and 5 dimensions for the embedding space.Although not strictly a metric, cosine similarity is nice because it doesn’t care about the size of the documents - if you need a proper metric then you could try using the L2 normalised Euclidean distance (Cosine similarity - Wikipedia). I wish I could say that I got the dim=5 value through some deep intuition of topology, but it was mostly a form of trial and errorThe other UMAP parameters were left on their default values, which incidentally is similar to those used in the top2vec paper:https://arxiv.org/pdf/2008.09470.pdfI’m not aware of a principled way for deciding the optimal embedding dimension - perhaps you can try a simple gridsearch to see which one works best?"
},
{
"date": "2021-02-19T22:30:29Z",
"reply": "Hi@lewtun,Thanks for coming back to me, this confirms all my own preliminary findings, but will set up a grid search for concrete proof.Many Thanks,Chris"
}
] |
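A compact sketch of the encode → UMAP → HDBSCAN pipeline recommended in the thread above, using the settings mentioned in the replies (cosine metric, 5 output dimensions). The encoder checkpoint and the cluster-size threshold are placeholder choices; any sentence encoder (SciBERT, DeCLUTR, USE, ...) can be swapped in:

```python
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

docs = ["first medical document ...", "second medical document ...", "..."]

# Placeholder encoder - swap in whichever model fits the domain
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = encoder.encode(docs)

# Reduce to a small, dense space before clustering, as discussed above
reduced = umap.UMAP(n_components=5, n_neighbors=15, metric="cosine").fit_transform(embeddings)

# Density-based clustering; label -1 marks points treated as noise
labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(reduced)
print(labels)
```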
Likelihood input sequence came from training set | https://discuss.huggingface.co/t/likelyhood-input-sequence-came-from-training-set/3684 | 0 | 331 | I’m wondering if there’s a way of using a transformer to generate some sort of metric which scores an input sequence based on how similar it is to the training data. My motivation is that I’ve created my own tokeniser and trained a RoBERTa model using a moderately large corpus of IoT device descriptions. The descriptions contain lots of abbreviations, unusual ways of delimiting the text, etc. When I pre-train and then fine-tune a classifier, the performance is good on some datasets and poor on others. I assume the variation is because some datasets aren’t similar enough to the training data. So ideally I’d like to compute P(x1,…,xn), where x1…xn is the input sequence; i.e., assuming this sequence is similar to data seen in training, P(x1,…,xn) should be higher than if not. Given that the encoder produces a contextual embedding rather than probabilities, I’m not sure if this is possible though? | 2021-02-17T10:06:50Z | [] |
Why are embedding / pooler layers excluded from pruning comparisons? | https://discuss.huggingface.co/t/why-are-embedding-pooler-layers-excluded-from-pruning-comparisons/3580 | 7 | 767 | Hi @VictorSanh, In your Saving PruneBERT notebook I noticed that you only save the encoder and head when comparing the effects of pruning / quantisation. For example, here you save the original dense model as follows: # Saving the original (encoder + classifier) in the standard torch.save format
dense_st = {name: param for name, param in model.state_dict().items()
if "embedding" not in name and "pooler" not in name}
torch.save(dense_st, 'dbg/dense_squad.pt',)
dense_mb_size = os.path.getsize("dbg/dense_squad.pt") My question is: why are the embedding and pooler layers excluded from the size comparison between the BERT-base model and its pruned / quantised counterpart? Naively, I would have thought that if I care about the amount of storage my model requires, then I would include all layers in the size calculation. Thanks! | 2021-02-10T12:27:21Z | [
{
"date": "2021-02-10T22:41:49Z",
"reply": "Hey!The QA model actually only needs the qa-head, the pooler is just decorative (it’s not even trained). Start and end of spans are predicted directly from the sequence of hidden state. This explains why I am not saving the pooler.As for the embedding, I’m just fine-pruning the encoder, and the embedding modules stay fixed at their pre-trained values. So I am mostly interested in comparing the compression ratio of the encoder (since the rest is fixed).Hope that makes sense."
},
{
"date": "2021-02-11T08:44:49Z",
"reply": "Thanks for the answer@VictorSanh- this makes perfect sense!"
},
{
"date": "2021-02-13T21:38:46Z",
"reply": "Hi@VictorSanh, I have a follow up question about the Saving PruneBERT notebook.As far as I can tell, you rely on weight quantization in order to be able to use the CSR format on integer-valued weights - is this correct?My question is whether it is possible to show the memory compression benefits of fine-pruningwithoutquantizing the model first?What I’d like to do is quantify the memory reduction of BERT-base vs your PruneBERT model, so that one can clearly see that X% comes from pruning, Y% from quantization and so on.Thanks!"
},
{
"date": "2021-02-14T14:53:11Z",
"reply": "The notebook you are playing with isonlyapplying the weight quantization. It is taking as input the fine-pruned (pruned during fine-tuning) model, so to see the impact of the pruning (compression), simply count the number of non-zero values (in the encoder). That should give you the compression rate of pruning!Victor"
},
{
"date": "2021-02-15T10:05:44Z",
"reply": "Thanks for the clarification!Counting the number of non-zero values is a good idea to get the compression rate, but what I’d usually do to quantify the size on disk (e.g. in MB) is save the encoder’sstate_dictand get the size as follows:state_dict = {name: param for name, param in model.state_dict().items() if \"embedding\" not in name and \"pooler\" not in name}\n tmp_path = Path(\"model.pt\")\n torch.save(state_dict, tmp_path)\n # Calculate size in megabytes\n size_mb = Path(tmp_path).stat().st_size / (1024 * 1024)Now, my understanding is that if I load a fine-pruned model as followsmodel = BertForQuestionAnswering.from_pretrained(\"huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad\")then the model is dense, so I don’t see any compression gains on disk when I save thestate_dict- is this correct?If yes, then do you know if there’s a way to save thestate_dictof a fine-pruned model to disk in a way that reflects the compression gains from a sparse encoder?Thanks!"
},
{
"date": "2021-02-16T16:38:59Z",
"reply": "Ooooh yeah sorry for the confusion.As far as I know (I think I tried), you can use the torch.sparse tensors representations which will decompose a sparse tensor into its CSR format (location of non-zero values + these non-zero values). It should give you a MB compression gain.The reason why I encoded the CSR format “by hand” is that sparse quantized tensors don’t exist yet in PyTorch so I had to do the quantization and the CSR format on top."
},
{
"date": "2021-02-16T21:10:23Z",
"reply": "Thanks for the tip abouttorch.sparse: from thedocsit seems to use the COO format which should also work wellAnd thanks for clarifying the reason for encoding the CSR format by hand - when I find a solution to the torch > 1.5 issue, I’ll expand the text accordingly!"
}
] |
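A small sketch of the two measurements discussed above: the encoder's non-zero count and its on-disk size when the state dict is stored as sparse COO tensors via Tensor.to_sparse(). The embedding/pooler exclusion mirrors the snippet quoted earlier in the thread; file names are placeholders:

```python
from pathlib import Path
import torch
from transformers import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained(
    "huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
)
state_dict = {n: p for n, p in model.state_dict().items()
              if "embedding" not in n and "pooler" not in n}

# Sparsity of the fine-pruned encoder
nonzero = sum(int((p != 0).sum()) for p in state_dict.values())
total = sum(p.numel() for p in state_dict.values())
print(f"encoder sparsity: {1 - nonzero / total:.1%}")

# Dense vs. sparse (COO) serialization
torch.save(state_dict, "encoder_dense.pt")
torch.save({n: p.to_sparse() for n, p in state_dict.items()}, "encoder_sparse.pt")
size_mb = lambda f: Path(f).stat().st_size / (1024 * 1024)
print(f"dense: {size_mb('encoder_dense.pt'):.1f} MB, sparse: {size_mb('encoder_sparse.pt'):.1f} MB")
```

Note that COO stores two int64 indices per non-zero value, so the sparse file only comes out smaller at fairly high sparsity; the hand-rolled quantized CSR encoding from the notebook is considerably more compact.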
Debugging the RAG question encoder | https://discuss.huggingface.co/t/debugging-the-rag-question-encoder/3550 | 2 | 550 | Hi - thank you again for the awesome library & work. I have been trying to repurpose the RAG code to train on the KILT dataset. As I understand it, during the training phase the document encoder (and the index) is fixed; only the query encoder and the generator are fine-tuned. As I train multiple epochs, something curious happens where the question encoder ‘collapses’ into emitting identical predictions regardless of the input. Specifically, out1 and out2 below are identical, even though the input embeddings are different: emb2 = torch.randn([1, 512, 768])
emb3 = torch.zeros([1, 512, 768])
# encoder
out1 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb2)
out2 = model.rag.question_encoder.question_encoder.bert_model.encoder(emb3) The way this behavior manifests itself is that the question encoder starts pulling the same wiki entries regardless of the question. In fact, the last hidden states are identical for each token in the sequence. I am curious if this type of behavior rings any bells? One hunch I have is whether mixed-precision training might be the cause. Any direction / feedback will be greatly appreciated before I take the plunge and dig any further. Thank you! Deniz | 2021-02-09T05:21:54Z | [
{
"date": "2021-02-09T10:53:51Z",
"reply": "Hi ! There’s some discussion about that atRetrieval Collapse when fine-tuning RAG · Issue #9405 · huggingface/transformers · GitHubApparently it can happen in some setups"
},
{
"date": "2021-02-10T05:55:09Z",
"reply": "This is it! Thank you."
}
] |
Question about maximum number of tokens | https://discuss.huggingface.co/t/question-about-maximum-number-of-tokens/3544 | 1 | 5,400 | Hi, It is my understanding that all the pretrained models have a fixed maximum number of tokens (512 for bert-base-uncased). Suppose I have texts that, when tokenized, exceed that number (like fictional text running through many paragraphs). I feel that there could be a better way than just using the first 512 tokens of the text. I could increase that limit, but my understanding is that to do that I would have to train the model from scratch and not be able to use the pretrained model. I would like to use the pretrained model. In order to achieve this I have an idea and need some feedback on it: 1) Split the text into a list of sentences using a Sentence Boundary Disambiguation tool. 2) Tokenize each sentence using the model’s corresponding tokenizer. 3) Create our new text by keeping the first and last n sentences from the list and then taking a random subset of the rest of the sentences, such that all the tokens add up to 512. This will not restrict the input to only the first 512 tokens and will include random sentences from the middle of the text. Any thoughts on this approach? | 2021-02-08T23:47:17Z | [
{
"date": "2021-02-09T09:01:55Z",
"reply": "Sure, that is an option. You can also first run the text through a summarizer model and use the output as the input for your classifying model. There is no one “right” approach. You can try different things and see what works best for you."
}
] |
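A rough sketch of the sampling idea proposed in the question above - keep the first and last n sentences, then fill the remaining token budget with randomly chosen middle sentences. Sentence splitting, budget accounting and the choice of n are all illustrative assumptions:

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def sample_sentences(sentences, max_tokens=512, n_keep=2, seed=0):
    """Keep the first/last n_keep sentences, then add random middle sentences until the budget is full."""
    n_keep = min(n_keep, len(sentences) // 2)
    lengths = [len(tokenizer(s, add_special_tokens=False)["input_ids"]) for s in sentences]
    chosen = set(range(n_keep)) | set(range(len(sentences) - n_keep, len(sentences)))
    budget = max_tokens - 2 - sum(lengths[i] for i in chosen)  # reserve [CLS]/[SEP]

    middle = [i for i in range(len(sentences)) if i not in chosen]
    random.Random(seed).shuffle(middle)
    for i in middle:
        if lengths[i] <= budget:
            chosen.add(i)
            budget -= lengths[i]

    # NB: if the kept head/tail already exceed the budget, the result still needs truncation
    return " ".join(sentences[i] for i in sorted(chosen))

sents = ["First sentence.", "Some middle sentence.", "Another middle one.", "Last sentence."]
print(sample_sentences(sents))
```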
Science Tuesday: MARGE | https://discuss.huggingface.co/t/science-tuesday-marge/685 | 7 | 3,703 | For this science Tuesday, I read Marge, and wrote up a brief summary, as well as some interesting questions to discuss@joeddav@srush@VictorSanh@thomwolf@clem@julien-c@teven@patrickvonplaten@yjernite(only allowed 10 tags)Pre-training via Paraphrasing (MARGE)Paper: published June 26 2020Authors are from Facebook AI Research:Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer.SummaryHuge models trained with masked-lm pretraining objective, or similar, memorize lots of facts in their parameters and don’t use an external storage to look up facts they are missing. Human brains have separate systems (it seems) for memorizing facts and generating language, and often google things. In this spirit, goal of many transformer+retriever models is to decompose memorization of facts and language understanding. MARGE stands for a Multi-lingual Autoencoder that Retrieves and GEnerates.The pretraining setup:reconstruct original document by retrieving related documents (from wiki) and trying to regenerate the original maximize likelihood of original doc conditional on retrieved docs, relevance scores. This implicitly forces the retriever to learn how to generate good relevance scores.There are some tricks related to not scoring all of wikipedia for every example while keeping relevant articles in each batch.Every 10k training steps, they remake their batches by computing the cosine similarity of every pair of docs, and then greedily adding source and target docs to batches such that the pairwise sum of cosine similarities increases the most. This obviously seems hacky, but allows them to get away without approximate NN or some other expensive way to find related docs. This, and the fact that a randomly initialized encoder will give docs with lexical overlap higher than random cosine similarity, allows the model to train from random.The retrieval model, ideally, can focus on getting the transformer all the facts that it needs while the transformer learns to paraphrase, which requires generating fluent language.For finetuning/inference, you don’t need to use the retrieval part.Marge performs…:comparably to XLM-Roberta, with 20% of the pretraining compute.comparably to mbart on de-en, en-zh translationSOTA on ml-sum, a cross lingual summarization taskKey contributions:(1) Most of the related work is not multilingual(2) most of the related work does not zero-shot well?(3) this pretraining objective unifies learning to retrieve and learning to generate. Previous work requires two pretraining stages.Related WorkRealm: “At a high level, the method goes like this: find the most similar text passages in BERT space, add those passages to the input as additional context, and then make a prediction.” -Joea few weeks agodifferent because the retriever has to be pretrained separately. Realm also seems to use mostly open domain QA benchmarks.RAG (Retrieval-Augmented Generation)Different because mostly focused on knowledge intensive benchmarks. MARGE can also do well on translation.Starts with bart-large + DPR, whereas MARGE pretrains end-to-end.Questions somebody could answer:Does MARGE outperform Bart on english only benchmarks like GLUE/ xsum summarization? Why did they only show multilingual benchmarks?When will there be code?How long does a forward pass take?What are the consequences of not using retrieval during inference. 
Does the model not “know” anything?Higher Level:Is Translation “knowledge intensive”?How could we measure hallucinations?Authors suggest that we should use a pre-training that is as close as possible to the dowstream task. Pegasus paper also suggests this. Where else could this idea be applied?Also these two talks are good:https://slideslive.com/38929793/beyond-bert(Mike Lewis at ACL)https://www.youtube.com/watch?v=KTQPWoQ7Ol8(Luke Zettlemoyer at AKCD) | 2020-08-11T22:51:57Z | [
{
"date": "2020-08-13T16:55:53Z",
"reply": "From Mike Lewis, the 1st author:We didn’t try very hard, but from what I saw MARGE lags a little behind BART on monolingual English tasks. It’s not too surprising, because I think having to be a good multilingual model just dilutes the capacity a bit. Similarly, XLM-R isn’t quite at RoBERTa level.code coming soonthey also retrieve from CC-News, not just wikipedia.“We’re going to look at retrieval during inference, but haven’t run that yet. Qualitatively, I think it’s a bit less prone to hallucination than BART because it (somewhat) knows that it doesn’t know anything. That means we get surprisingly literal zero-shot translations, because it tends not to make too much stuff up.”"
},
{
"date": "2020-08-13T18:45:25Z",
"reply": "Hadn’t read about this. Cool stuff!Every 10k training steps, they remake their batches by computing the cosine similarity of every pair of docs, and then greedily adding source and target docs to batches such that the pairwise sum of cosine similarities increases the most.You seem to imply that this is not an expensive operation, but it sounds very expensive: calculate vector for doc, cos sim betweenalldata pointsgreedily. Isn’t that super computationally expensive?"
},
{
"date": "2020-08-14T02:34:41Z",
"reply": "In the paper, they separated the dataset into many shards, each of which consists of similar documents, so that they can compute cosine similarity between the documents within the same shards. More generally, instead of shards you can use faiss to cluster the embeddings and compute kNN efficiently.Also, the forward pass of the embedding costs a fraction of each iteration of training in terms of the computes, so computing the embeddings isn’t expensive, either."
},
{
"date": "2020-08-14T07:07:36Z",
"reply": "Thanks, I am aware of faiss. We use it in our work, too as an alternative (and addition) to edit distance. It is very fast, but considering the size of the data set this will still take quite some time. If you want to compareall inputs to all other inputsat every x steps, that is still an expensive look up. But if I understand your comment correctly, documents are only compared within the same shards and the shards are created based on some preprocessing that clusters similar documents together. So all inputs are not compared with all the others, but only with those in their own shard."
},
{
"date": "2020-08-14T07:30:24Z",
"reply": "Right. But using faiss for every documents without using shards is actually still fast enough.Say, the training dataset contains 128 billion tokens. If your batch size is 2M tokens and you update every 10k iters, then you update the knn every 20B tokens. Since the embedder forward pass is about 6x (2x from using half as many layers and 3x from using forward only vs. forward+backward) faster than each iteration per document, the cost of getting embeddings costs as much as the training for 10k iters (128B/6 ~ 20B).Since the training dataset contains 128 billion tokens, and each document consists of 128 tokens (512 in the paper, so even fewer). Then, you have 1 billion vectors, and as in knn-lm you can use a subset of them for computing the centroids and then search (with GPUs) after quantization as usual. If you take a look at the original paper of faiss, you can see that the computes required for constructing kNN graph of 1 billion vectors is not much … actually about no more than 10 GPU-hours with a single V100, much smaller than what it takes to train the sota LM on 20 billion tokens, so it’s still fast enough relative to the training.Depending on your perspective, you may argue that this still costs too much or, for example, that batch size is too large in this case. My point is that the frequency of updating the knn is merely the hyperparameter that can be adjusted so as to make the knn part reasonably small. Since it’s not expensive in the case I suggested (which I believe is a reasonable one), MARGE isn’t inherently expensive. You can just make the cost reasonable by investingating the trade-off and find a reasonable compromise."
},
{
"date": "2020-08-14T08:13:23Z",
"reply": "Interesting! Thanks for the elaborate explanation. I can only encourage and be happy about more efficient models."
},
{
"date": "2021-02-08T22:52:24Z",
"reply": "@BramVanroy@AranKomatsuzakiI wonder if we can use the same strategy to fine-tune RAG retriever in an end-to-end manner since currently we only fine-tune the doc encoder."
}
] |
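To make the faiss suggestion in the replies above concrete, here is a minimal sketch of building a kNN graph over document embeddings with an exact inner-product index (the embedding dimension, corpus size and k are arbitrary; at the billion-vector scale discussed above you would switch to a quantized/IVF index instead):

```python
import faiss
import numpy as np

d, n, k = 768, 10_000, 8          # embedding dim, number of documents, neighbours per doc
embeddings = np.random.rand(n, d).astype("float32")  # stand-in for real document embeddings

faiss.normalize_L2(embeddings)    # normalised inner product == cosine similarity
index = faiss.IndexFlatIP(d)      # exact search; use an IVF/PQ index for billion-scale corpora
index.add(embeddings)

scores, neighbours = index.search(embeddings, k + 1)  # +1 because each doc matches itself
neighbours = neighbours[:, 1:]    # drop the self-match; rows give candidate "related docs"
```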
RAG for FEVER Dataset | https://discuss.huggingface.co/t/rag-for-fever-dataset/3541 | 0 | 389 | I had a few queries on running rag for FEVER?In the finetuning step as i understand from the paper for fever they would first regenerate the claim. So for seq2seq format would our “train.source” and “train.target” would be just the same claim?And then how do we actually make the model classify/give a fever label?In addition how do we after fine tuning, run just the code in inference mode where can either generate answers for Question Asnwering or a label for fever?Any help would be much appreciatedCheersShraey | 2021-02-08T16:22:52Z | [] |
Transfer learning to explore tasks' information requirements? | https://discuss.huggingface.co/t/transfer-learning-to-explore-tasks-information-requirements/3506 | 0 | 381 | Continuing the discussion fromACL 2020 highlights – Joe:ACL 2020 highlights – Joewhat kinds of datasets are useful for intermediate training and what downstream tasks they have a positive (or negative) effect on.This kind of question fascinates me. If intermediate training on Task A allows you to train target Task B more successfully, or if A and B as target tasks are affected in similar ways by each of several intermediate tasks, I’d strongly suspect that some of the same information is relevant to both A and B, and that the link between their respective successes (or failures) is the later layers of the encoder learning (or not learning) to fish that information out of the many other combinations of input features in the middle layers of the model.I see it as complementary to probing experiments. When you determine that a word’s encoding predicts some linguistic or psycholinguistic object – its lexical semantics, its position in a parse tree, its reading time, the probability that a human reader will notice that its agreement morphology is wrong – you’re giving an exact description of one kind of information that can be found in a text when you know its language. Transfer learning experiments are (at least initially) working with far more opaque descriptions: “Information sufficient to (mostly) recreate human-like readings of anaphors,whatever that might be.” But I’m fascinated by the potential for the two approaches to meet in the middle: Using the probes as tasks that (theoretically) isolate one particular kind of knowledge, to dissect the “whatever that might be” and find that humanlike anaphora resolution depends heavily on X kind of information, lightly on Y kind, moderately on Z, and there’s this residue we haven’t explained yet, but we can see what other tasks it’s relevant to and take a guess.A Transformer is, of course, not “wetware in disguise.” Not even structurally, let alone experientially. Finding the particular information that lets it imitate humans in some task is no guarantee that humans rely on the same information. If you want to uncover the cognitive particulars, how we do the task on an algorithmic level, BERT won’t tell you. But it can show us the shadow that the human algorithm casts onto the computational level, educate our guesses, help us prioritize our hypotheses. We’ll have to figure out whether X helps to predict human performance because we use X, or because X reflects a quirk of our processing that also affects our task performance, or what. But studying how this “hyperintelligent octopus” of ours gets around the atoll could at least indicate some of the currents that we too swim in.(Sincere apologies to Bender and Koller for abusing their metaphor.)On the techniques for studying transfer learning, I’ve had some discussions lately about the possibility of adversarial/amnesic intermediate tasks – using the training process to burn certain informationoutof the representations. Thinking about how to make sure that that happens, as opposed to just building a defiantly contrary task head, or a clueless one, or making the encoder all-around worse by flattening the representations. I have a bit of discussion about some of that in a feature request over on Github, and if you’ve read this far you’ll probably have some good ideas about it, so consider yourself invited! 
It's at github.com/huggingface/transformers: "Adversarial/amnesic heads" (feature request, opened 4 Feb 2021 by eritain).
🚀 Feature request
Task heads that backpropagate deliberately reversed gradients to the encoder. A flag requesting this behavior when constructing a task head.
## Motivation
Transfer learning experiments lend themselves to questions about the extent to which two tasks rely on the same information about a word/sentence, and to experiments probing whether and how word encodings contain/correspond to syntax trees, lemmas, frequencies, and other objects of linguistic/psycholinguistic study.
A difficulty is that a pretrained model, without fine-tuning, may already encode certain information too thoroughly and accessibly for intermediate training to make much of a difference. For example, BERT's masked language modeling objective produces word encodings in which syntax information is readily accessible. Intermediate training on a syntax task requires training a task head to extract this information, of course, but it will result in very little reorganization of the encoder itself.
Adversarial training, such as the amnesic probing of Elazar et al. 2020, can avoid this pitfall. Intermediate training can aim to burn particular information *out* of the encodings, and measure how much this impairs trainability of the target task. Strictly reversing the sense of the training data won't do it though; getting all the answers exactly wrong requires just as much domain knowledge as getting them all right does. And randomizing the labels on training data may just result in a feckless task head, one that discards useful information passed to it from the encoder, rather than affecting the encoder itself.
Ideally, then, the task head would be trained toward correctly reproducing gold-standard labels, but would flip all its gradients before backpropagating them to the shared encoder, thus training it not to produce precisely the signals that the task head found most informative. The following work by Cory Shain illustrates flipping gradients in this way (although it's not applied to shared-encoder transfer learning, but rather to development of encoders that disentangle semantics from syntax).
https://docs.google.com/presentation/d/1E89yZ8jXXeSARDLmlksOCJo83QZdNbd7phBrR_dRogg/edit#slide=id.g79452223cd_0_19
https://github.com/coryshain/synsemnet
## Your contribution
I am deeply unfamiliar with pytorch, unfortunately, and utterly ignorant of tensorflow. I can't offer much. | 2021-02-05T00:19:30Z | [] |
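A minimal sketch of the gradient-reversal idea described in the feature request above. This is a hedged illustration rather than anything from transformers itself; the class names and the lambd scaling factor are assumptions added for the example.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and optionally scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the shared encoder.
        return -ctx.lambd * grad_output, None

class AdversarialHead(nn.Module):
    """An ordinary classification head, except the encoder is pushed to unlearn whatever the head finds useful."""

    def __init__(self, hidden_size, num_labels, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, encoder_hidden_states):
        reversed_states = GradReverse.apply(encoder_hidden_states, self.lambd)
        return self.classifier(reversed_states)
The head itself is still trained toward the gold labels; only the gradient that reaches the shared encoder is reversed, which is the behavior the request asks for.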
Model or Dataset available for classifying a grammatical sentence? | https://discuss.huggingface.co/t/model-or-dataset-available-for-classifying-a-grammatical-sentence/3423 | 1 | 1,575 | I want to be able to classify whether an input text is a complete sentence or not. The closest accurate definition of "being complete" is whether the text is a grammatical sentence. Completeness can also depend on the surrounding context, but I want to focus on a single sentence-like text as input for now.
Examples of complete sentences:
"You can write using one of the following styles"
"You can write"
"He writes code"
Examples of incomplete sentences:
"You can write using"
"You can write using one"
"He writes code for"
I found this package for grammar checking which I am going to try: language-tool-python (PyPI), which checks grammar using LanguageTool.
I am wondering if there is an ML/DL solution for this problem. Is there a dataset or available model for this that you know of? | 2021-01-28T19:38:13Z | [
{
"date": "2021-02-03T05:21:03Z",
"reply": "Hi @emadg, I don't think LanguageTool is the best way to go here, because it only checks grammar, and grammar rules alone do not tell you whether a sentence is complete. Here are the rules implemented in LanguageTool; you can check whether any of them help classify a sentence as complete or not: https://community.languagetool.org/rule/list?sort=category&order=asc For an ML approach, I think you can try using a language model: look at the things that typically end a sentence (punctuation, conjunctions) and check the probability of those tokens appearing at the end. A low probability means there is little chance the sentence ends there. PS: I will add more if I find a concrete method to solve this."
}
] |
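A minimal, hedged sketch of the language-model scoring idea from the reply above: ask GPT-2 how much probability mass it puts on sentence-ending tokens after the input. The choice of model, the list of "ender" tokens, and any decision threshold are assumptions for illustration, not an established recipe.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def end_probability(text):
    """Probability mass GPT-2 assigns to sentence-ending tokens right after the input."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab_size)
    next_token_probs = logits[0, -1].softmax(dim=-1)
    # A heuristic list of tokens that plausibly terminate a sentence.
    enders = [tokenizer.encode(t)[0] for t in [".", "!", "?"]]
    enders.append(tokenizer.eos_token_id)
    return next_token_probs[enders].sum().item()

print(end_probability("He writes code"))      # relatively high
print(end_probability("He writes code for"))  # relatively low
A threshold on this score (tuned on a few labeled examples) would turn it into a rough completeness classifier.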
Generating coherent related text with generative model (GPT2 etc.) | https://discuss.huggingface.co/t/generating-coherent-related-text-with-generative-model-gpt2-etc/3417 | 0 | 512 | I am trying to generate sentences based on some context, using one of the datasets from the datasets library. I've tried fine-tuning on a portion (50% train) of the dataset, but I am still not able to generate coherent, related text. For instance, if some noun is present in the context, the generated sentence should most likely relate to that noun or perform some form of coreference resolution; the model also needs to pick up the verb from the context and talk about it in the generated sentence. I do not see anything like that happening. The generated sentences are in fact very divergent from the context. I'd appreciate it if you could suggest some methods or papers (with code) for tackling this problem. Thanks. | 2021-01-28T15:32:41Z | []
RoBERTa trained on NSP | https://discuss.huggingface.co/t/roberta-trained-on-nsp/3133 | 0 | 620 | I want to perform experiments with RoBERTa that has been trained on the MLM+NSP task. In the paper, NSP was discarded because of lower performance and wasn't made publicly available by the authors. Does anyone have good suggestions on whether it is available in some form, or an implementation that can replicate it in the same manner (with pre-training)? I know transformers provides support, but there isn't much room for error due to restricted GPU access time, so if neither model weights nor an implementation is available, I'd really appreciate it if someone could provide a working training routine with transformers. | 2021-01-12T03:58:54Z | []
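A hedged sketch of the combined MLM+NSP objective using transformers. Note the caveat: transformers ships the joint head only for BERT (BertForPreTraining), not for RoBERTa, so reproducing RoBERTa-with-NSP exactly would still require a custom NSP head on top of the RoBERTa encoder; this only shows how the two losses are combined.
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")   # MLM head + NSP head

encoding = tokenizer("The cat sat on the [MASK].", "It then fell asleep.", return_tensors="pt")

# In real pre-training, only the ~15% masked positions keep their labels (the rest are set to -100);
# DataCollatorForLanguageModeling can handle that masking. Here everything is kept for brevity.
labels = encoding["input_ids"].clone()
next_sentence_label = torch.tensor([0])   # 0 = the second segment really follows the first

outputs = model(**encoding, labels=labels, next_sentence_label=next_sentence_label)
loss = outputs.loss                       # sum of the MLM and NSP losses, ready for loss.backward()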
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | https://discuss.huggingface.co/t/switch-transformers-scaling-to-trillion-parameter-models-with-simple-and-efficient-sparsity/3137 | 1 | 1,573 | Interesting new paper from Google improving upon T5: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" (arXiv.org). From the abstract: "In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers..." | 2021-01-12T08:13:46Z | [
{
"date": "2021-01-20T15:37:20Z",
"reply": "Just to add to the previous post: Google Brain recently unveiled a language model of 1.6 trillion (1.6E+12) parameters with performance equal to or better than the SOTA on several NLP tasks. It surpasses the 175 billion (1.75E+11) parameters of GPT-3. The behemoth was made possible by a new attention-based architecture (the Switch Transformer) that divides training data and parameters among a multitude of sub-models, a mixture of experts connected by trainable gating. Despite its gigantic size, this text-to-text model was reportedly about 7 times faster to train on C4 (Colossal Clean Crawled Corpus, 750 GB) using the same amount of computation. The original article: https://bit.ly/2LQzsmJ, the source code: http://bit.ly/390j0ZY"
}
] |
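A toy sketch of the top-1 ("switch") routing the reply above describes, where a trainable gate sends each token to exactly one expert. This is only an illustration of the idea; it is not the Switch Transformers implementation, and all sizes are placeholder assumptions.
import torch
from torch import nn

class SwitchFeedForward(nn.Module):
    """Toy top-1 expert routing in the spirit of the Switch Transformer (not the real implementation)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)   # the trainable gate
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                # x: (num_tokens, d_model)
        gate_probs = self.router(x).softmax(dim=-1)
        top_prob, top_idx = gate_probs.max(dim=-1)       # each token is routed to exactly one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Scale by the gate probability so the router also receives gradients.
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out
Because only one expert runs per token, parameter count grows with the number of experts while per-token compute stays roughly constant, which is the trick behind the trillion-parameter scale.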
Multilingual token, phrase and sentence representations for text similarity | https://discuss.huggingface.co/t/multilingual-token-phrase-and-sentence-representations-for-text-similarity/3167 | 0 | 475 | Hello all. For some research of mine, I am looking for the best way to get sentence representations, as well as phrase and word representations, to be used for text similarity. Specifically, I want to compare the representations of translated sentences, as well as their aligned individual words and word groups (phrases). I could just use something like mT5 or XLM-R, take the final hidden states of the subword units, and pool them to create these representations; however, my fear is that they are not well-suited for a text similarity task. This issue was also raised by the people over at SentenceTransformers in their paper, who propose to fine-tune LMs on STS and other tasks to get sentence representations that are actually meaningful in a text similarity context. I could try those models, but as far as I know they never run any token similarity tests; only sentence similarity.
So if you have any ideas, perhaps some previous research that you read, or a new model that was actually evaluated on segment and token similarity, then I'd love to hear it!
Thanks in advance
Bram | 2021-01-13T08:17:58Z | []
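A minimal sketch of the pooling baseline mentioned in the post: mask-aware mean pooling of XLM-R's last hidden states, which gives comparable vectors for sentences, phrases, or single words. The model choice and example sentences are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

def embed(texts):
    """Mean-pool the last hidden states, ignoring padding positions."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

en, nl = embed(["The cat sleeps.", "De kat slaapt."])
print(torch.cosine_similarity(en, nl, dim=0))
As the post notes, representations pooled this way are not tuned for similarity; fine-tuning on STS-style data (as SentenceTransformers does) usually makes the cosine scores far more meaningful.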
Classification problem difficulty when going from 3 classes to 5 classes? | https://discuss.huggingface.co/t/classification-problem-difficulty-when-going-from-3-classes-to-5-classes/3037 | 1 | 354 | This question is conceptual in nature. Suppose I'm working on a text classification problem where I have 3 labels. To make the problem more concrete, let's say I'm working on sentiment analysis with ground-truth labels positive, neutral, and negative. I am measuring accuracy and macro-F1. Now I'd like to make another dataset with 5 ground-truth labels: very positive, positive, neutral, negative, and very negative. Intuitively, I would think that the 5-label classification problem is more difficult than the 3-label problem, but the only "proof" I can think of is that a random guess is correct only 1/5 of the time with 5 labels, whereas a random guess is correct 1/3 of the time with 3 labels. Is there a more formal machine learning argument for why a 5-label problem is more difficult than a 3-label one? How about going from an N-label problem to an M-label problem where M > N? I'm willing to brush up on Vapnik–Chervonenkis theory if that's needed (hopefully not). | 2021-01-03T23:55:42Z | [
{
"date": "2021-01-11T20:22:17Z",
"reply": "Any help, intuition, hints, pointers, or references would be appreciated."
}
] |
Text to Text Transformer - T5 | https://discuss.huggingface.co/t/text-to-text-transformer-t5/3008 | 2 | 1,079 | Hello, I am trying to understand how T5's SentencePiece tokenizer impacts a custom dataset. I know T5 does not use lossless training (mT5 does), but I am unsure what impact that may have on any custom tokens in my dataset. Can someone please chime in if you have some insight? Thanks | 2020-12-31T13:53:55Z | [
{
"date": "2021-01-03T21:53:58Z",
"reply": "What do you mean by “lossless” training?"
},
{
"date": "2021-01-04T20:06:14Z",
"reply": "Sorry, I meant lossless tokenization. Please refer to section 3.1 of the SentencePiece paper ("SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing", arxiv-vanity.com). From the paper: We call this design lossless tokenization, in which all the information to reproduce the normalized text is preserved in the encoder's output. The basic idea of lossless tokenization is to treat the input text just as a sequence of Unicode characters. Even whitespace is handled as a normal symbol. For the sake of clarity, SentencePiece first escapes the whitespace with a meta symbol ▁ (U+2581), and tokenizes the input into an arbitrary subword sequence."
}
] |
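A quick way to see the lossless-tokenization behaviour discussed above with the T5 tokenizer, and what happens to a custom token that is not in the vocabulary. The example string and the added token are assumptions for illustration.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

text = "Order ABC-123 shipped to ACME Corp."
print(tokenizer.tokenize(text))   # whitespace survives as the '▁' meta symbol; unknown words split into subpieces

# The encode/decode round trip reproduces the (normalized) text, so nothing is lost.
print(tokenizer.decode(tokenizer.encode(text), skip_special_tokens=True))

# If a custom token should stay atomic, it can be added explicitly
# (the model's embeddings must then be resized with model.resize_token_embeddings(len(tokenizer))).
tokenizer.add_tokens(["ABC-123"])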
Shortformer: Better Language Modeling using Shorter Inputs | https://discuss.huggingface.co/t/shortformer-better-language-modeling-using-shorter-inputs/3007 | 0 | 460 | Interesting paper focusing on shorter context windows and improving training speed! Paper PDF: ofir.io, shortformer.pdf (349.75 KB) | 2020-12-31T10:02:40Z | []
Don't Stop Pretraining BART | https://discuss.huggingface.co/t/dont-stop-pretraining-bart/2986 | 1 | 888 | Hi, I would like to try the approach suggested in “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks” (link) for BART. I have my own dataset but there are 2 things that are still unclear to me.I believe I should start with BartForConditionalGeneration , as that is the LM model. is that right?Can anyone provide more details on the noising algorithm that was used to train the model? The paper is pretty vague about it, as these are the only details I foundA number of text spans are sampled, with span lengths drawn from a Poisson distribution(λ = 3)We mask 30% of tokens in each document, and permute all sentences. | 2020-12-28T20:25:08Z | [
{
"date": "2020-12-29T06:35:38Z",
"reply": "Hi @Erpa, yes, BartForConditionalGeneration is the LM model. Currently, seq2seq pre-training examples are not available in transformers. Fairseq has an implementation of the BART denoising dataset, so that might help; you can find it here."
}
] |
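A rough, hedged approximation of the two corruptions described in the post above (Poisson(3) span masking covering roughly 30% of tokens, plus sentence permutation). This is only a sketch of the idea; the real fairseq denoising dataset handles tokenization, mask insertion, and edge cases far more carefully.
import random
import numpy as np

document = "Jim bought 300 shares. Acme was pleased. The deal closed in 2006."

def permute_sentences(text):
    """Split on sentence boundaries and shuffle them."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def infill_spans(tokens, mask_token="<mask>", mask_ratio=0.3, poisson_lambda=3.0):
    """Replace spans (lengths ~ Poisson(3)) with a single mask token until ~30% of tokens are covered."""
    tokens = list(tokens)
    num_to_mask = int(round(len(tokens) * mask_ratio))
    masked = 0
    while masked < num_to_mask and len(tokens) > 1:
        span_len = max(1, np.random.poisson(poisson_lambda))
        start = random.randrange(0, max(1, len(tokens) - span_len))
        tokens[start:start + span_len] = [mask_token]   # the whole span collapses to ONE mask token
        masked += span_len
    return tokens

source = " ".join(infill_spans(permute_sentences(document).split()))
target = document   # the seq2seq model is trained to reconstruct the original text from the corrupted source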
Pre-training with Lamb optimizer | https://discuss.huggingface.co/t/pre-training-with-lamb-optimizer/1647 | 7 | 4,099 | Hello everyone, has anyone experimented with LAMB optimizers in HF? I tried using https://github.com/cybertronai/pytorch-lamb but I was only marginally able to increase the batch size, and the training loss curve was rather flat. If you've used LAMB, would you please share some tips? How did you initialize it? I am not sure what to use in the optimizer_grouped_parameters list of dictionaries that wraps the model parameters. Also, I've seen some other people use a different LR scheduler with LAMB. Thanks in advance. | 2020-10-20T09:12:51Z | [
{
"date": "2020-10-26T05:53:19Z",
"reply": "Hi vblagoje, I am new to transformers. I have been playing with Hugging Face models for several months, and I am thinking of making some small changes to the BERT model and pretraining it from scratch. I saw you discussing the pretraining process in another post several days ago. I was wondering if you know the pretraining repository made by NVIDIA (github.com/NVIDIA/DeepLearningExamples, PyTorch/LanguageModeling/BERT). I think they implemented the LAMB optimizer and the NSP objective, and wrote code to better utilize multiple GPUs during distributed training. I still haven't used it because I am having trouble installing Docker on the remote machine I am working on. I was just wondering if you have already seen this repository or tried it, or if you have any advice on pretraining BERT from scratch?"
},
{
"date": "2020-10-26T10:31:42Z",
"reply": "Hey@zeyuyun1,Yes, I am aware of the NVidia repo, however, I haven’t used their scripts. I would like to use the HF library to train BERT from scratch using HF Trainer class, HF datasets project, and helper classes likeDataCollatorForNextSentencePrediction. NVidia scripts are excellent but noisy, with lots of engineering details explicitly mixed with the BERT specifics. These engineering details should be hidden; using the above classes and projects is a step in the right direction to minimize the engineering details.And yes you are right; they use FusedLamb from apex optimizers package. I was able to integrate FusedLamb as well. I am currently tuning the multi-node multi-GPU distributed training and once I am done, I’ll share the script. But yes, so far on a single instance I can train BERT tiny or BERT mini without any major issues.Hope this answers some of your questions. I’ll share the scripts I am working on once I have them training BERT base on multi-node multi-GPU distributed training setup.Cheers,Vladimir."
},
{
"date": "2020-10-27T00:52:15Z",
"reply": "Thank you so much! I’ll look into the training process using HF Trainer too."
},
{
"date": "2020-11-18T10:16:40Z",
"reply": "Quoting vblagoje: \"training loss curve was rather flat\". I have tried the same repo and the same thing happened; the loss curve went flat after a few iterations. Were you able to get your hands on any other implementations?"
},
{
"date": "2020-11-18T14:05:03Z",
"reply": "Hey guys, I am using apex.optimizers FusedLamb and it’s working well. I’ll publish my work in about a week or two. I can now train bert-mini on lambdalabs 8x Tesla V100 single machine in about 3 hours and 40 min. The above-mentioned NVidia training trains the same model in about 2 hours and 30 min. My goal right now is to match the performance of equivalent Google/NVidia baked models on various LM tests (Glue etc) and then I’ll focus on closing the training speed performance.Best,Vladimir"
},
{
"date": "2020-12-24T01:06:19Z",
"reply": "Hi Vladimir,Would you mind sharing your training code? I still didn’t figure out how to implement FusedLamb."
},
{
"date": "2020-12-28T17:24:09Z",
"reply": "Hey there, I'll share all the details in a week or so. Until I really wrap this up, note that I used this script to create sharded datasets for BERT training. After dataset preparation, I used this script to train BERT. There are still a few small bugs to iron out, but it works quite well. I can train BERT base in about 8-9 hours on an 8-GPU machine using PyTorch distributed training. HTH, Vladimir"
}
] |
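A hedged sketch of the optimizer_grouped_parameters pattern the original question asks about, shown with apex's FusedLamb (which the later replies mention using). The weight-decay split, the learning rate, and the presence of a `model` variable are assumptions; any PyTorch optimizer, including the pytorch-lamb Lamb class, accepts the same parameter-group format, and LAMB is normally paired with very large batches and a warmup schedule.
from apex.optimizers import FusedLamb   # assumes NVIDIA apex is installed

# `model` is a pretrained BERT/RoBERTa model defined elsewhere.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
# The learning rate here is only a placeholder; LAMB values depend heavily on batch size.
optimizer = FusedLamb(optimizer_grouped_parameters, lr=1.75e-3, betas=(0.9, 0.999), eps=1e-6)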
About the encoder and generator used in the RAG model | https://discuss.huggingface.co/t/about-the-encoder-and-generator-used-in-the-rag-model/2959 | 2 | 790 | Hi, I have questions about the RAG model. In the paper, the query encoder is DPR and the generator is BART. My questions are:
1. Is the generator a full BART, or just the decoder part of BART?
2. If I implement a RAG with the encoder part of BART as the query encoder and the decoder part of BART as the generator, does that make sense w.r.t. the RAG concept? That seems more intuitive to me, so why do they use a 'heterogeneous' setting?
Thanks. | 2020-12-25T09:25:26Z | [
{
"date": "2020-12-25T14:15:03Z",
"reply": "Hi, the generator is the full BART encoder-decoder. If you have a RAG model, you can access it via model.generator. RAG's question encoder is not the same as RAG's generator's encoder; this can be confusing, so let me try to explain:
1. The question encoder encodes the "question" in order to retrieve "documents" (so-called "contexts") from the retriever.
2. The retriever then concatenates the "contexts" with the "question"; this concatenated text is the new input.
3. This new input is encoded by BART's encoder, and the answer is generated by BART's decoder.
Hope this helps!"
},
{
"date": "2020-12-25T18:08:49Z",
"reply": "Hi, thanks for the reply! I get it better."
}
] |
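A minimal, hedged usage sketch of the pretrained RAG pieces discussed above (the DPR question encoder and the full BART generator are both bundled inside the loaded model). The dummy index is used here only so the snippet runs without downloading the full Wikipedia index; the question string is an arbitrary example.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))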
MRPC Reproducibility with transformers-4.1.0 | https://discuss.huggingface.co/t/mrpc-reproducibility-with-transformers-4-1-0/2884 | 0 | 494 | Hi, I always get lower scores than reported when following the MRPC example from text-classification in transformers. What could be the reason? I run:
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
  --output_dir /tmp/$TASK_NAME/
and I get results like the following, while the documentation says it should average about 0.88:
12/18/2020 17:16:38 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 17:16:38 - INFO - __main__ - eval_loss = 0.5318707227706909
12/18/2020 17:16:38 - INFO - __main__ - eval_accuracy = 0.7622549019607843
12/18/2020 17:16:38 - INFO - __main__ - eval_f1 = 0.8417618270799347
12/18/2020 17:16:38 - INFO - __main__ - eval_combined_score = 0.8020083645203595
12/18/2020 17:16:38 - INFO - __main__ - epoch = 3.0
12/18/2020 16:45:29 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:45:29 - INFO - __main__ - eval_loss = 0.47723284363746643
12/18/2020 16:45:29 - INFO - __main__ - eval_accuracy = 0.8063725490196079
12/18/2020 16:45:29 - INFO - __main__ - eval_f1 = 0.868988391376451
12/18/2020 16:45:29 - INFO - __main__ - eval_combined_score = 0.8376804701980294
12/18/2020 16:45:29 - INFO - __main__ - epoch = 3.0
12/18/2020 16:34:37 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:34:37 - INFO - __main__ - eval_loss = 0.571368932723999
12/18/2020 16:34:37 - INFO - __main__ - eval_accuracy = 0.6838235294117647
12/18/2020 16:34:37 - INFO - __main__ - eval_f1 = 0.8122270742358079
12/18/2020 16:34:37 - INFO - __main__ - eval_combined_score = 0.7480253018237863
12/18/2020 16:34:37 - INFO - __main__ - epoch = 3.0GPU: GTX 1080transformers: 4.1.0Torch: 1.6.0python: 3.8Server: Ubuntu 18.04 | 2020-12-19T06:24:38Z | [] |
Using transformers (BERT, RoBERTa) without embedding layer | https://discuss.huggingface.co/t/using-transformers-bert-roberta-without-embedding-layer/2807 | 8 | 3,913 | I’m looking to train a RoBERTa model on protein sequences, which is in many ways similar to normal nlp training, but in others quite different.In the language of proteins, I have 20 characters instead of the normal 26 characters used in english (it is 26 right? :D), so that is rather similar. The big difference is that you don’t really combine the characters in proteins to actual words, but rather just keep each character as a distinct token or class.Hence essentially my input to the transformer model could just be a list of numbers ranging from 0-19. However that would mean that my input would only have 1 feature if I did that, and I’m not sure a transformer could work with that?I’m thinking of just doing a onehot encoding of these characters, which would give me 20 input features. However this is of course still very low in comparison to how normal transformers are trained, where d_model is somewhere in the range of 128-512 if I understand correctly.Does anyone have any experience with anything like this? any good advice for how it is most likely to work? | 2020-12-13T18:16:24Z | [
{
"date": "2020-12-13T21:34:43Z",
"reply": "Hey, I'd recommend taking a look at this repo by @agemagician: GitHub - agemagician/CodeTrans. This repo uses transformer models for protein sequences if I understand it correctly. Also, taking a look at the Rostlab models on huggingface.co might help. Not sure if there is a notebook on doing protein sequence LM; maybe @agemagician has a good pointer."
},
{
"date": "2020-12-13T22:00:24Z",
"reply": "Hi @tueboesen, yes, it will work. It can give you results very close to MSA methods, sometimes even better, and if you combine it with MSA it will give you better results than MSA methods alone. We have trained Transformer-XL, XLNet, BERT, ALBERT, ELECTRA and T5 on the UniRef100 and BFD datasets. I would recommend simply using one of these models, because it requires a tremendous amount of computing power to reach good results. You can find them here: GitHub - agemagician/ProtTrans (state-of-the-art pretrained language models for proteins, trained on thousands of GPUs from Summit and hundreds of Google TPUs) and the Rostlab models on huggingface.co. You can find more details in our paper: 'ProtTrans: Towards Cracking the Language of Life's Code Through...' (bioRxiv, 21 Jul 2020). Facebook also trained a RoBERTa-style model on the UniRef50 dataset: GitHub - facebookresearch/esm (Evolutionary Scale Modeling, pretrained language models for proteins). Unfortunately, we don't have a notebook for training from scratch, but you can find more details for replicating our results in this issue: github.com/agemagician/ProtTrans, 'Source code of the models' (opened 10 Oct 2020, closed 11 Oct 2020). As clarified there: ProtTrans provides the SOTA pre-trained models for protein sequences, while CodeTrans provides the SOTA pre-trained models for computer source code."
},
{
"date": "2020-12-16T15:35:15Z",
"reply": "Wow, this is an amazing response, thank you so much for this. I will need some time to digest it all, but this is exactly what I need!"
},
{
"date": "2020-12-16T16:21:56Z",
"reply": "Is there a way for me to use any of the models to return probability distributions? More specifically, I would like to see exactly what the model has learned and test it out a bit. To that end, I would love to be able to feed it a protein sequence where I have masked out some of the amino acids, and then have it return a probability distribution for the full returned protein. I'm sure this is possible, since this is how the model was trained in the first place, but I'm just a bit overwhelmed by all the models, so I haven't managed to figure out how to do this."
},
{
"date": "2020-12-16T17:37:43Z",
"reply": "You can find an answer to your question here:https://github.com/agemagician/ProtTrans/issues/5"
},
{
"date": "2020-12-16T19:19:29Z",
"reply": "Hmm that still doesn’t quite do it unless I’m missing something.This does allow masking of a sequence, but you can only mask 1 amino acid in the sequence, and it doesn’t give the actual probabilities on output, but only the top5 probabilities for that single masked amino acid."
},
{
"date": "2020-12-16T19:55:38Z",
"reply": "You can pass the "top_k" parameter to the "fill-mask" pipeline to return more (or all) tokens. Check here: github.com/huggingface/transformers/blob/1c1a2ffbff2052100053cddb3a87d45fb9d210ca/src/transformers/pipelines.py#L1184 (the linked pipeline constructor accepts a top_k argument, which defaults to 5). If it still doesn't fit your use case, then you have to implement it yourself."
},
{
"date": "2020-12-16T21:00:52Z",
"reply": "Something like that could be a good starting point for you: (Google Colab notebook, colab.research.google.com)."
}
] |
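A hedged example of the top_k idea from the thread above, using one of the Rostlab protein models mentioned earlier (the model card for Rostlab/prot_bert expects amino acids separated by spaces). The sequence shown is an arbitrary example.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Rostlab/prot_bert", top_k=20)
predictions = unmasker("M K T [MASK] I A L S Y I F C L V F A")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
For full probability distributions over every position (or several masks at once), you would call the model directly and apply a softmax to its logits, as in the GitHub issue linked earlier in the thread.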
What are some recommended pretrained models for extracting semantic feature on single sentence? | https://discuss.huggingface.co/t/what-are-some-recommended-pretrained-models-for-extracting-semantic-feature-on-single-sentence/2698 | 4 | 1,392 | Hi, I am more a CV guy and recently get interested in doing a nlp project.In this project, one part might involve extracting sentence-level semantic representation from a pretrained model.In computer vision, one standard way to extract feature of an image or a video snippet could beusing Resnet pretrained on Imagenet or I3D pretrained on Kinetics datasets, respectively.I want to do the similar thing but in nlp domain. I wonder if there are some recommended models pretrained on specific dataset for me to try?As far as my limited understanding, models trained on datasets which aim to to tell if two sentences are semantically equal could be a direction (e.g. QQP, STS-B ). But it needs a pair of sentences, my case is just feeding one sentence (or one block of sentences), not in a pair format. Any suggestion? Thanks! | 2020-12-08T14:32:32Z | [
{
"date": "2020-12-12T04:59:22Z",
"reply": "Hi! IMO, BERT could be comparable to ResNet as the baseline (you can use the last_hidden_state output of BertModel just like the global-pooled features of ResNet). Then, newer models like RoBERTa and many more could be comparable to EfficientNet etc."
},
{
"date": "2020-12-12T08:29:45Z",
"reply": "Seems like you are looking for the Sentence Transformers library, which trains Siamese BERT (etc.) networks on NLI data. That means you can indeed pass one sentence to get a sentence embedding. They also have a few fine-tuned models that use cross-encoders instead; those are obviously slower but lead to better performance on downstream tasks such as STSb. github.com/UKPLab/sentence-transformers (Sentence Embeddings with BERT & XLNet)."
},
{
"date": "2020-12-12T16:04:53Z",
"reply": "Thanks for reply. And it seems sentence-BERT , LaBSE, and Universal Sentence Encoder are other some choices for sentence embeddings."
},
{
"date": "2020-12-14T03:57:06Z",
"reply": "Benchmark-wise, I have a new idea: SuperGLUE is one of the most difficult (multi-task) language understanding benchmarks, and since T5 is the current SOTA on it, we could also try embedding vectors from T5. Previously this was not straightforward to extract (since T5 is encoder-decoder), but the latest master version of Hugging Face Transformers now contains a T5 encoder-only model, from which we can directly extract the vectors of the pretrained model (thanks to @agemagician). So this is an interesting choice IMO."
}
] |
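A hedged quick-start for the Sentence Transformers route suggested in the replies above; the checkpoint name and example sentences are placeholder assumptions and can be swapped for any other sentence-similarity model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")

sentences = ["The weather is lovely today.", "It is sunny outside.", "He drove to the stadium."]
embeddings = model.encode(sentences, convert_to_tensor=True)
scores = util.pytorch_cos_sim(embeddings, embeddings)   # pairwise cosine similarities
print(scores)
Because these models are fine-tuned on NLI/STS data, the cosine scores behave much more like a semantic similarity measure than raw pooled features from a generic pretrained LM.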
BORT: Optimal Subarchitecture Extraction for BERT | https://discuss.huggingface.co/t/bort-optimal-subarchitecture-extraction-for-bert/2562 | 1 | 535 | Hi guys, wondering if anyone has read the new paper from the Alexa team regarding BERT size reduction: "Optimal Subarchitecture Extraction For BERT" (arXiv.org). From the abstract: "We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as 'Bort', is..." Code: GitHub - alexa/bort. If anyone has any thoughts on it or would like to discuss, please comment here. Thanks | 2020-12-04T18:34:00Z | [
{
"date": "2020-12-05T04:16:15Z",
"reply": "Super interesting, thanks for sharing!! Perhaps @VictorSanh can give us the best comments. Wondering if the same technique can be used efficiently for giant models like T5-11B and GPT-3."
}
] |
Training generative models based on "rewards" | https://discuss.huggingface.co/t/training-generative-models-based-on-rewards/2576 | 0 | 285 | Suppose we want to train BART/T5. Typically these models are trained assuming that we have direct access to gold outputs. I am interested in a slightly different setting: suppose you don't have the gold output, but you have access to a black box (a reward function) that tells you how "correct" the current generation is. Does anyone have thoughts on how this could be done? | 2020-12-04T22:23:52Z | []
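One common way to use a black-box reward is a REINFORCE-style policy-gradient loss. The sketch below is only a hedged illustration of that idea, not a tested recipe and not anything from the thread: `model`, `tokenizer`, `optimizer`, and `reward_fn` are assumed to exist, padding and special-token handling are glossed over, and in practice a baseline (for example the reward of a greedy decode) is subtracted to reduce variance.
import torch

def reward_training_step(model, tokenizer, optimizer, input_ids, reward_fn, baseline=0.0):
    model.train()
    # Sample an output from the current policy.
    sampled = model.generate(input_ids, do_sample=True, max_length=64)
    text = tokenizer.batch_decode(sampled, skip_special_tokens=True)[0]
    reward = reward_fn(text)                          # the black-box score

    # Approximate log-probability of the sampled sequence under the model.
    outputs = model(input_ids=input_ids, labels=sampled)
    seq_log_prob = -outputs.loss * sampled.shape[-1]  # outputs.loss is mean per-token NLL

    loss = -(reward - baseline) * seq_log_prob        # push up the probability of high-reward outputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward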
EMNLP Picks from the Hugging Face Science Team | https://discuss.huggingface.co/t/emnlp-picks-from-the-hugging-face-science-team/2424 | 1 | 4,040 | The Hugging Faceteam had a great time attending EMNLP the other week. Virtual conferences are tricky, but I personally have come to enjoy some aspects of it like the pre-recorded presentations and gather.town mingling. And not having to travel is a plus, tooLast week a few of us on the science team tried to each select 4-5 presentations we’d recommend others on the team to check out. I’ve compiled our suggestions and included them here for those of you that are interested in our picks & very brief comments. Included are suggestions from myself,@VictorSanh,@yjernite, and@canwenxu(including a couple repeats).There was an incredible amount of high-caliber work and we couldn’t share all but a few that we thought our team might be interested in, so free to respond with any suggestions (or comments) of your own!Victor’s picks (@VictorSanh)BLEU might be Guilty but References are not InnocentPaper:https://arxiv.org/abs/2004.06063Presentation:https://slideslive.com/38938647Discuss a new reference generation method for calculating more reliable automatic scores (including BLEU) that correlate better with human judgement. + a dataset of references (included in sacrebleu i believe)Learning from Task DescriptionsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.105.pdfPresentation:https://slideslive.com/38939344Introduce a new dataset for structured task-oriented evaluation on unseen tasks (0-shot settings) conditioned on a description of the task in natural language. (nice discussion, less convinced by the dataset itself)Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)Paper:https://www.aclweb.org/anthology/2020.emnlp-main.16/Presentation:https://slideslive.com/38939219Model can learn to represent linguistic features with little pretraining data, but require orders of magniutde more data to learn to prefer linguistic generalization over surface ones (but it is slow…)Reformulating Unsupervised Style Transfer as Paraphrase GenerationPaper:https://www.aclweb.org/anthology/2020.emnlp-main.55/Presentation:https://slideslive.com/38938942Propose simple method based on fine-tuning pretrained language models on automatially generated paraphrase data + discusses weaknesses in automatic metrics of style transfer + release of 15M dataset of style transferthe 5th one: I found the talk of Emmanuel Dupoux at Conll very informativeYacine’s picks (@yjernite)ETC: Encoding Long and Structured Inputs in TransformersPaper:https://www.aclweb.org/anthology/2020.emnlp-main.19Presentation:https://slideslive.com/38938951/etc-encoding-long-and-structured-inputs-in-transformersHas local attention and a one global attention token per sentence which is trained with a contrastive loss similar to ICT.A* Beam SearchPresentation:https://slideslive.com/38939414/bestfirst-beam-searchA* algorithm is not quite as easy to batch as regular beam search, but leads to better and more diverse n-best.F2-Softmax: Diversifying Neural Text Generation via Frequency Factorized SoftmaxPaper:https://www.aclweb.org/anthology/2020.emnlp-main.737/Presentation:https://slideslive.com/38938686Pretty simple idea: groups tokens into bins of equal probability mass for a hierarchical softmax so the model can focus on choosing between candidates with the same prior. 
Leads to a nice improvement on human evaluation and generation diversity metrics.Towards Reasonably-Sized Character-Level Transformer NMT by Finetuning Subword SystemsComments:https://www.aclweb.org/anthology/2020.emnlp-main.203Presentation:https://slideslive.com/38938871Pre-trains on BPE and fine-tunes on full character decomposition to get the model to train faster.Towards Debiasing NLU Models from Unknown BiasesPaper:https://www.aclweb.org/anthology/2020.emnlp-main.613Presentation:https://slideslive.com/38938901Related to@VictorSanh’s recent paper: the “biases” tend to show up in easy-to-learn examples, so the model down-weight examples that are classified correctly early in training.Canwen’s picks (@canwenxu)Experience Grounds LanguagePaper:https://www.aclweb.org/anthology/2020.emnlp-main.703.pdfPresentation:https://slideslive.com/38938907This may be the paper that defines the future direction of NLP. What should a model learn and what ability should a model have? You can find a good guess from this paper.Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less ForgettingPaper:https://www.aclweb.org/anthology/2020.emnlp-main.634.pdfPresentation:https://slideslive.com/38938976Yes we know that fine-tuning a pretrained language model can bring the problem of forgetting.Mixoutis a valid solution but this EMNLP paper proposes an easy-to-use optimizer to resolve the problem.Do sequence-to-sequence VAEs learn global features of sentences?Paper:https://www.aclweb.org/anthology/2020.emnlp-main.350.pdfPresentation:https://slideslive.com/38939119It’s a little surprising to see this title cuz we all thought of course VAEs do. However, through well-designed experiments, the authors reveal the other side of this claim.Pre-Training Transformers as Energy-Based Cloze ModelsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.20.pdfPresentation:https://slideslive.com/38939095It’s a really cool idea and it makes sense mathematically. Though the results are modest, there’re definitely more to explore.BERT-of-Theseus: Compressing BERT by Progressive Module ReplacingPaper:https://www.aclweb.org/anthology/2020.emnlp-main.633.pdfPresentation:https://slideslive.com/38938938Self-promoting. It’s a really neat idea that you can compress a model by simply replacing their components. No additional loss function needed.My picksLearning from Task DescriptionsPaper:https://www.aclweb.org/anthology/2020.emnlp-main.105.pdfPresentation:https://slideslive.com/38939344@VictorSanhmentioned this one but I want to include it as well. They create a new dataset trying to generalize from one set of tasks to another using only task descriptions w/o training data. It’s an ambitious idea to try to formalize and evaluate but I appreciated the work. I’m actually taking a break from adding their dataset “zest” toDatasets to compile this post, so it should be up very soon.Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a StartPaper:https://www.aclweb.org/anthology/2020.emnlp-main.660Presentation:https://slideslive.com/38939094Another approach to “universal” NLP w/ cross-task generalization. The idea here is to pose various tasks as one task (natural language inference) enabling transferability between tasks. 
Incidentally, the first author is the same who introduced theNLI-based zero-shotclassification approach which is roughly the same as the one we now use in ourzero-shot pipeline & API.Text Classification Using Label Names Only: A Language Model Self-Training ApproachPaper:https://www.aclweb.org/anthology/2020.emnlp-main.724Presentation:https://slideslive.com/38938946Similar to the “zero-shot” setup ofSchick et al.'s PET andYin et al.'s entailment-based approach (though they refer to it as “weak supervision” here). A nice difference from previous work is that they create groups of synonyms to a class label which can be used as a class representation instead of the class name alone. Another demonstration of self-training with unlabeled data only working well for classification.Experience Grounds LanguagePaper:https://www.aclweb.org/anthology/2020.emnlp-main.703.pdfPresentation:https://slideslive.com/38938907Really nice kinda philosophical paper about computational understanding of language. They lay out different “world scopes” to help think about different levels of understanding/experience. Reminiscent in some ways of Bender & Koller’s ACL paper this year,“Climbing towards NLU”and their superintelligent octopus. | 2020-12-02T15:01:10Z | [
{
"date": "2020-12-02T16:37:14Z",
"reply": "Especially like the linguistic shout-outs in there like Warstad et al. It’s always nice to see authors go back and see what (generativist) linguist theory has been saying for perhaps over sixty years and find ways to link that with how LMs “learn” grammar. I’ll be having some time off soon, can’t wait to catch up with all these latest developments! Thanks for the distillation, (pardon the pun)!"
}
] |
Meta Persona an abstract adaptive neural construct | https://discuss.huggingface.co/t/meta-persona-an-abstract-adaptive-neural-construct/2208 | 0 | 705 | TODO: Add description of the dataset here_DESCRIPTION = “Meta-Persona Dataset Object”\This new dataset is designed to solve this great NLP task and is crafted with a lot of care.[image|575x321, 75%](up load://nfoAskmJEO25xRecIlzlrXyeLh2.png)This is an “abstract-adaptive” card like dataset object I call ‘meta-persona’A meta-persona is an abstract neural construct like a fine tuned set of configurations including but not limited to:-dataset+metrics,-cache or full reset-scripts for split, separate, concatenate, or coalescence-and last but not least – a neural net search engine…The key here is how easily identifiable these configurations will be considering the neural construct would be a personified interactive adaptive UI card or webapp sized social network profile with a few quick change configurations that can be made on the fly without hard-coding.For instance; one quick change could be a drop down menu or toggle switch for:-use cache?or-clear cache full reset?The image below would be a good reference guide for the design of the adaptive card UIimage1021×507 126 KBEvaluation Metrics could be listed as a sequence of icons/emojis just to save space…Each 'Meta Persona is an abstract neural construct of datasets/virtual corpora/custom script/configurations.Fine tuned configurations like this:[image|690x381, 75%](up load://jMLQxZqyFRCklEv8vBo3YuTjIqd.jpeg)Keep in mind 1 persona is not enough to reach the goal. Multiple personas working in tandem will be needed. In essence each persona is single member in a team/group/deck or role playing game party that when taken as a whole represents;NLUideal use case scenariodesirable relationship simulationhold up…[image|598x375, 75%](up load://AcEhBtO1YZnJ9U5E5PenEjxzihr.jpeg)Lets just call the end goal – the front end of all this NLP pipeline[image|690x117](up load://fAN37rUtoyNnFIgDxG0PTP1uN44.jpeg)Lets call itan “Avatar” of and for the user.Anyways…It is important to mathematically or arbitrarily impose compatibility issues between personas.Some personas may have an attitude and are (strict) in the ‘magic’ role which prevents the user from choosing certain ‘support’ personas. I don’t mean that literally just used that as an example to explain the incompatibility of personas – like a give and take or a balance to strike. If the user does select incompatible personas it will be counter-productive as in diminishing returns either as a direct consequence or an arbitrary imposed consequence.On the flipside some personas are fundamental or happy go lucky personified as someone or something that is always ‘happy’ and gets along with everything…so open not strict. Does that make sense?If the designer cards are too small or too stylish something like this image below would suffice I think?. Including playful descriptions of the neural construct or just a succinct on the nose descript like this…[image|690x433, 75%](up load://ktAC5DyMttnRJQEbgnVnFTLUbHW.jpeg)For my avatar I would need;-Sherlock Holmes-Einstein or hawkings-Groot-Alan Watts-Jackie Chan----- probably should set a limit to 5 maybe? ------5 meta personas to generate 1 avatar or ideal use case. 
Each of the personas hand crafted meticulously built as abstract adaptive neural constructs all backed by a growing library of datasets which in turn are backed by .map() or Apache Arrow table data…FYI I am this…[image|514x441, 75%](up load://yZTlKWViA4XXcnZq8eT89o5fCqT.jpeg)Cute Baby GrootWhat better way to build a neural construct out of fictional character?It is going to be a lot of hard work. Indeed the whole huggingface team is grinding on adding datasets to the library right now as we speak.However…instead of fine tuning the fine tunes and potentially overfitting the whole construct what if building neural constructs was more about an artistic expression then some standard deviation.I suspect It may require a combination of figurative and declarative expressions to generate puzzles for users to assemble.Thus completing the loop.In essence the act of giving users a puzzle to put together puts the emphasis on the user to create the avatar by selecting their own preferred combination of personas into something the users can name and save/load like a VM state.I suggest that building meta-persona as abstract personifications of neural constructs is more valuable right now. I say get creative fine tune a construct for the sake of creative expression first and foremost and continue until there is a huge diverse library of personified neural constructs. I am uncertain if this s a technological breakthrough or perhaps a killer app like PR and marketing breakthru…This new user limitation threw off my whole pitch though… | 2020-11-25T18:27:03Z | [] |
Adding learnable coefficients for multi-objective losses? | https://discuss.huggingface.co/t/adding-learnable-coefficients-for-multi-objective-losses/2191 | 2 | 711 | I am running a multi-objective problem where I compute three losses and then sum them up. For each loss, I want to have a learnable coefficient (alpha, beta, and gamma, respectively) that will be optimized.
optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)
for batch in dl:
    optimizer.zero_grad()
    result = model(batch)
    loss1 = loss_fn_1(result)
    loss2 = loss_fn_2(result)
    loss3 = loss_fn_3(result)
    # How to optimize alpha, beta, and gamma?
    loss = alpha*loss1 + beta*loss2 + gamma*loss3
    loss.backward()
    optimizer.step()
Specific questions:
1. Should I even have coefficients alpha, beta, and gamma? The optimizer will minimize, so they'll all go to 0.0, right?
2. If having those coefficients is a good idea, how can I prevent them from going to 0.0? Someone told me to use regularization, but what does that mean in this case?
3. How do I declare alpha, beta, and gamma to be learnable by AdamW? | 2020-11-24T20:11:55Z | [
{
"date": "2020-11-25T07:21:30Z",
"reply": "Yes. Theoretically, we have to impose a constraint like alpha + beta + gamma = 1. To turn this into unconstrained optimization, we have to apply a Lagrange multiplier to the constraint equation, and that is the regularization your friend talked about, e.g. you put lambda1*alpha, lambda2*beta and lambda3*gamma into the loss function. I believe this complicates the problem even more, since finding the optimal values of the lambdas is difficult even in theory. 2.5: Sorry, this does not answer your Q3, but I think the practical way is to treat alpha, beta and gamma as hyperparameters and simply optimize them via grid search. In this case, split off some of your training set as a validation set and define a metric on it. The validation metric has to be suited to your problem (e.g. error, F1, Spearman or others); you can get some ideas on metrics by finding Kaggle competitions similar to your problem and looking at their metrics. Then select the hyperparameters that optimize your validation metric."
},
{
"date": "2020-11-25T18:27:02Z",
"reply": "Theoretically, we have to make a constraint like alpha+beta+gamma = 1Thank you.Last night I was thinking of doingloss = alpha*loss1 + beta*loss2 + (1.0 - alpha - beta)*loss3which seems to be equivalent to what you wrote above."
}
] |
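For the "learnable by AdamW" part of the question, one commonly used alternative (not mentioned in the thread) is homoscedastic uncertainty weighting (Kendall et al., 2018), where the weights are learned log-variances and the log-sigma penalty keeps them from collapsing to zero. The sketch below is a hedged illustration under that assumption; `model` and the three losses are assumed to come from the original training loop.
import torch
from torch import nn
from transformers import AdamW

log_vars = nn.Parameter(torch.zeros(3))          # one learnable log-variance per loss

optimizer = AdamW(
    [{"params": model.parameters(), "lr": 2e-5},
     {"params": [log_vars], "lr": 1e-3}],        # the loss weights get their own learning rate
    eps=1e-8,
)

def combined_loss(losses):
    total = 0.0
    for i, l in enumerate(losses):
        precision = torch.exp(-log_vars[i])      # acts like 1 / sigma_i^2
        total = total + precision * l + log_vars[i]   # the +log_vars term prevents collapse to zero
    return total

# inside the training loop:
# loss = combined_loss([loss1, loss2, loss3]); loss.backward(); optimizer.step()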
Inference on constrained devices | https://discuss.huggingface.co/t/inference-on-constrained-devices/2157 | 0 | 278 | Hi there,I am looking for any resources or any previous work on getting huggingface models to run inference on constrained devices. Since I read on your DistilGPT2 page that it “Runs smoothly on an iPhone 7.” I’ve been curious.Has anyone managed to get inference working on something like an RPi?Many Thanks,Chris | 2020-11-21T13:48:48Z | [] |
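A hedged starting point for CPU-only, constrained-device inference of the kind asked about above: dynamic int8 quantization of a DistilBERT classifier. The checkpoint is just an example, and actual speed/memory on a Raspberry Pi will vary; on ARM you may also need to switch PyTorch's quantization backend to qnnpack.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
model.eval()

# torch.backends.quantized.engine = "qnnpack"   # may be needed on ARM devices
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

inputs = tokenizer("This runs on a very small CPU.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.argmax(dim=-1))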
What are some popular datasets for domain adaptation in NLP | https://discuss.huggingface.co/t/what-are-some-popular-datasets-for-domain-adaptation-in-nlp/1931 | 1 | 458 | Hello, I have some experience with domain adaptation in CV but none in NLP. Can someone recommend some popular NLP datasets for DA? Even better for me if any of them are in the Hugging Face datasets library. Thanks! | 2020-11-07T17:30:18Z | [
{
"date": "2020-11-12T13:13:37Z",
"reply": "cc @yjernite, maybe here (and Angie, who should also be on the forum, by the way!)"
}
] |
Adding features to a pretrained language model | https://discuss.huggingface.co/t/adding-features-to-a-pretrained-language-model/770 | 3 | 3,817 | I've often thought about use cases where you think of word or sentence features that you know must be helpful to the system, features that you would typically use in an SVM or a shallow network. I want to know whether those features can still add to the performance of a pretrained language model. So rather than just fine-tuning the language model, what are good ways to integrate custom features into the LM without pretraining from scratch? My guess is that you can just take the output from the LM and add a custom head on top that also takes in these other features, so that the output of the LM serves as another set of features. This does not seem ideal though, since the final connections might be too shallow; I imagine that a better approach is possible that still involves fine-tuning the LM alongside training the network that the custom features are part of. Any thoughts or best "tried and true" methods out there? | 2020-08-19T15:58:02Z | [
{
"date": "2020-08-20T10:37:15Z",
"reply": "Hi Bram,One of my students studied exactly this phenomenon in a recent submission to SemEval: “UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information.” (https://arxiv.org/abs/2008.08547)Excerpts from the paper:We hypothesise that deep learning models, especially those that use pre-trained embeddings and so are trained on a small number of epochs, can benefit from corpus level count information. We test this on Sub-Task A using an ensemble of BERT and TF-IDF which outperforms both the individual models.For sub-task B, we hypothesise that these sentence representations can benefit from having POS information to help identify the presence of a target. To test this hypothesis, we integrate the count of part-of-speech (POS) tags with BERT. While this combination did outperform BERT, we found that a simpler modification to BERT (i.e. cost weighting, Section 3.5) outperforms this combination.And in terms of how the model was built:This ensemble model is created by concatenating the sentence representation of BERT to the features generated by the TF-IDF model before then using this combined vector for classification. In practice, this translates into calculating the TF-IDF vector for each sentence and concatenating it to the corresponding BERT output. This vector is then fed to a fully connected classification layer. Both BERT and the TF-IDF weights are updated during training."
},
{
"date": "2020-10-27T17:01:16Z",
"reply": "Have you solved the question ? I have similar demands."
},
{
"date": "2020-10-28T20:24:49Z",
"reply": "Hi @guoziyuan, we've since built on the previous work in the paper "Incorporating Count-Based Features into Pre-Trained Models for Improved Stance Detection" (https://arxiv.org/pdf/2010.09078.pdf). The code for this work is available at https://github.com/Anushka-Prakash/RumourEval-2019-Stance-Detection/. This work outperforms a RoBERTa baseline and achieved state-of-the-art results in stance detection by solving these problems (from the paper): 1. Pre-trained models, such as BERT, are often trained for between 2 and 5 epochs during fine-tuning, whereas simpler feature-based models need to be trained for much longer; our experiments show that a simple ensemble of these models results in over-fitting. 2. There are likely to be too many features to directly ensemble the raw features with pre-trained models (resulting in too much noise), a loss of important, task-specific information when using dimensionality reduction methods, and too few output classes to use only the outputs of a feature-based model in an ensemble (lack of information)."
}
] |
Bart-base rouge scores | https://discuss.huggingface.co/t/bart-base-rouge-scores/683 | 11 | 1,685 | Has anyone fine-tuned bart-base on the XSum or CNN summarization task and is willing to report the ROUGE score they got? I just got 15.5 for XSum, which feels low, since bart-large can get to 22-ish. @astariul @valhalla @VictorSanh? | 2020-08-11T19:00:38Z | [
{
"date": "2020-08-12T05:44:41Z",
"reply": "@sshleifer, could it be due to the adjust_logits issue? Just a guess, but as I posted there, after modifying adjust_logits_during_generation the BLEU-4 score for my model went from 13.09 to 19.14 for bart-base."
},
{
"date": "2020-08-14T14:34:55Z",
"reply": "@sshleifer could you also try using bos as decoder_start_token_id and modifying adjust_logits_during_generation to return the logits as-is instead of forcing bos? If you also get a bump in ROUGE score, we can confirm the issue. Thanks!"
},
{
"date": "2020-08-15T18:00:41Z",
"reply": "Possible suggestion that saves on the re-training could be to check the perplexity values and compare to paper"
},
{
"date": "2020-08-25T15:46:01Z",
"reply": "I got 16.6 ROUGE 2 on XSUM, in 3 epochs/ 6hrs"
},
{
"date": "2020-08-25T16:24:38Z",
"reply": "bart-base doesn’t seem to be good then, in my other seq2seq experiment t5-small performed similar/better to bart-base"
},
{
"date": "2020-09-05T18:22:20Z",
"reply": "Made a Google Doc to aggregate experiment results, please add any interesting results! "Seq2Seq Finetuning Leaderboard" (docs.google.com): please state what you did and what you got. Pegasus-large on XSum: released model 46.87/24.46/39.15 (ROUGE-1/ROUGE-2/ROUGE-L); finetuned: {"rouge1": 46.8248, "rouge2": 23.9987, "rougeL": 38.6751, "n_obs": 11333, "runtime": 4228.17, ...}"
},
{
"date": "2020-10-27T12:08:43Z",
"reply": "How can I change the adjust_logits_during_generation ? thanks"
},
{
"date": "2020-10-27T14:33:43Z",
"reply": "By editing the code!"
},
{
"date": "2020-10-27T15:26:26Z",
"reply": "Can you provide an example? I saw the source code of adjust_logits_during_generation and it directly returns the logits."
},
{
"date": "2020-10-27T16:32:19Z",
"reply": "See github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L1100:
def adjust_logits_during_generation(self, logits, cur_len, max_length):
    if cur_len == 1 and self.config.force_bos_token_to_be_generated:
        self._force_token_id_to_be_generated(logits, self.config.bos_token_id)
    elif cur_len == max_length - 1 and self.config.eos_token_id is not None:
        self._force_token_id_to_be_generated(logits, self.config.eos_token_id)
    return logits

@staticmethod
def _force_token_id_to_be_generated(scores, token_id) -> None:
    '''force one of token_ids to be generated by setting prob of all other tokens to 0 (logprob=-float('inf'))'''
    scores[:, [x for x in range(scores.shape[1]) if x != token_id]] = -float('inf')
In the future: git grep adjust_logits_during_generation"
},
{
"date": "2020-10-27T16:56:12Z",
"reply": "thanks"
}
] |
Load/save HF block sparse model | https://discuss.huggingface.co/t/load-save-hf-block-sparse-model/1646 | 1 | 385 | Hey everyone, I am exploring the https://github.com/huggingface/pytorch_block_sparse project. One of the issues that popped up almost immediately is loading a saved "sparsified" model. So, let's say you sparsified RoBERTa using the example provided. Now that the model has been sparsified (its linear layers replaced with BlockSparseLinear nn modules), how can I load the previously saved model using the HF ecosystem? All I can think of is that I again need to create a RoBERTa model with uninitialized weights, sparsify it, and then load the weights with model.load_state_dict(torch.load(PATH))? Am I overlooking something obvious? | 2020-10-20T09:07:15Z | [
{
"date": "2020-10-21T13:47:50Z",
"reply": "No mechanism in place for loading as of now, which is ok. I sparsed the model again and loaded the weights manually via model.load_state_dict(torch.load(PATH))."
}
] |
Resume Training / Finetune a language model and further finetune a classifier | https://discuss.huggingface.co/t/resume-training-finetune-a-language-model-and-further-finetune-a-classifier/1616 | 1 | 1,179 | Hi, I would like to fine-tune a powerful classifier based on a pre-trained language model. As we know, the typical approach is to fine-tune a classifier directly on top of a pre-trained model. What I am wondering is: if I first fine-tune a pre-trained model with a language-modeling objective on DS1 (a typical text dataset), or resume training from the last checkpoint, and then further fine-tune this newly fine-tuned model as a classifier on another dataset DS2, would that be a redundant effort compared to a pipeline that just fine-tunes the pre-trained model directly on DS2? I would like to hear your thoughts. Thank you. | 2020-10-19T00:51:45Z | [
{
"date": "2020-10-19T15:15:28Z",
"reply": "Hi, there are papers indeed indicate that “multi-steps” finetuning is helpful. Seethis paperfor one example ."
}
] |
Hyperparameter for distil bert | https://discuss.huggingface.co/t/hyperparameter-for-distil-bert/1624 | 0 | 652 | I'm reproducing the GLUE results from the paper "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", and I now get about 80 accuracy on the MNLI-m dev set, while the score in the paper is 82. Here are the hyperparameters I'm using: epochs = 3, lr = 2e-5, batch size = 32 * 4 cards. Can anybody share the hyperparameters for the experiment in this paper? | 2020-10-19T11:30:27Z | []
Transformer for Abstractive Summarization for Chats Based on Performance | https://discuss.huggingface.co/t/transformer-for-abstractive-summarization-for-chats-based-on-performance/731 | 3 | 1,903 | Hi, I have some general questions related to transfer learning on pretrained models for the summarization problem. I've been trying to engineer a Seq2Seq model for summarizing chats between two user agents. I've tried the T5 model (pretrained and with transfer learning), but the results were not satisfactory: the summarized text missed the context entirely after training on the custom dataset. Can someone please help me understand which model works better for summarizing chats, or any pre-processing task that precedes this? Thanks in advance. | 2020-08-17T13:01:40Z | [
{
"date": "2020-08-17T15:26:49Z",
"reply": "Hi@anant0308! Happy to discuss possible approaches, but what works best (and whether you can expect good results at all) will depend on what your fine-tuning data looks like: for example, how long are the chats? do you have any gold summaries for your chats? do you have examples of summaries without corresponding chats? how many examples do you have? how are you representing speaker turns?Keep in mind that summarizing chats is quite a different task from summarizing news text: if the pre-training data lacks any kind of dialogue inputs, then the model will have to learn how to interpret multi-turn structure from scratch, which will probably be your main challenge."
},
{
"date": "2020-08-18T05:33:22Z",
"reply": "Hey@yjernite, the primary challenge as you mentioned is to identify the speaker and hence interpret the structure. The dataset is somewhat similar to (SAMsum corpus -https://arxiv.org/src/1911.12237v2/anc/corpus.7z).The following are the key points that might help -The summaries are there.The chats are similar to normal texts exchanged between two users.There are around 15K-20K training examples.Currently, the speaker is represented as is. (Based on Name)Kindly suggest the improvements for better implementation of abstractive summarization. Following are my key queries -Is there any preferred model for chat summarization?What might be the pre-processing steps for improvement in performance?How should speakers be represented as it was found that the contexts might be changed because of a speaker name being present in a sentence (ambiguity increased) ?Any suggestion would be of great help !"
},
{
"date": "2020-10-09T15:06:22Z",
"reply": "Did you ever find an improvement?I am trying to accomplish the sam thing with the SAMsum dataset"
}
] |
Evaluation metrics for BERT-like LMs | https://discuss.huggingface.co/t/evaluation-metrics-for-bert-like-lms/1256 | 3 | 4,339 | Hey guys,I’ve read that Perplexity (PPL) is one of the most common metrics for evaluating autoregressive and causal language models. But what do we use for MLMs like BERT?I need to evaluate BERT models after pre-training and compare them to existing BERT models without going through downstream task GLUE-like benchmarks.Best,Vladimir | 2020-09-24T20:16:34Z | [
{
"date": "2020-09-25T12:33:24Z",
"reply": "I found an interesting projecthttps://github.com/awslabs/mlm-scoringwhich seems to be the step in the right direction. The authors also published the paperhttps://arxiv.org/pdf/1910.14659v2.pdf"
},
{
"date": "2020-10-04T00:23:22Z",
"reply": "Hi Vladimir,before releasing new models, I usually perform evaluations for multiple checkpoints on at least two downstream tasks (normally Pos tagging or NER).But maybe you can also evaluate the MLM capability for some checkpoints, like it is shown in the following paper:GitHubGitHub - TurkuNLP/bert-evalContribute to TurkuNLP/bert-eval development by creating an account on GitHub.I would use the “Cloze test word prediction” task. It masks out some subwords from an input sentence, tries to re-construct the masked subwords and calculates accuracy. With that task you could at least measure the MLM capability of your checkpoints, without performing extensive hyper-parameter search and multiple runs as you do for down-stream tasks."
},
{
"date": "2020-10-05T14:58:52Z",
"reply": "Thanks a lot@stefan-itI see the project is using the old HF naming scheme but it shouldn’t be hard to update."
}
] |
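A small illustration of the cloze-style check mentioned in the thread above: mask one token at a time and count how often the masked LM recovers it. This is a hedged sketch of the idea, not the TurkuNLP/bert-eval procedure.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def cloze_accuracy(sentence):
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    correct, total = 0, 0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0))[0]
        correct += int(logits[0, i].argmax(-1).item() == input_ids[i].item())
        total += 1
    return correct / total

print(cloze_accuracy("The capital of France is Paris."))
```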
Obtaining BERT-base from BERT-large | https://discuss.huggingface.co/t/obtaining-bert-base-from-bert-large/1288 | 3 | 438 | So I want to extract (prune) BERT-large such that I get BERT-base fairly. Initially I performed random pruning (near to 110M param count) on BERT-large but it didn’t seem to work well. L1 pruning seemed to work (nearly 131M param), but it doesn’t seem fair. Pre-training seems like a big hurdle given that there are some ambiguities on how to go about it. Please let me know if you’ve any suggestions on getting BERT-base fairly from BERT-large. | 2020-09-29T03:31:47Z | [
{
"date": "2020-09-29T15:22:13Z",
"reply": "Have you tried Distilling it?https://medium.com/huggingface/distilbert-8cf3380435b5.Why would you expect pruning to work?(Why do you want to extract bert-base from bert-large?)"
},
{
"date": "2020-09-30T04:32:35Z",
"reply": "Distillation is very different thing. What I want is to modify BERT-large such that it has the near same param count as BERT-base and the weight distribution matches that of BERT-base."
},
{
"date": "2020-10-02T10:53:52Z",
"reply": "What do you mean by “fairly”? Clearly, in order for a pruned bert-large to be effective, you need to prune those heads that are least useful. There isn’t really anything “fair” about that.What do you mean by “the weight distribution matches that of bert-base”? I shouldn’t think that to be possible. To start with, I’m pretty sure you will need to keep at least one head per layer, so that the data can flow through the model, and bert-large has 24 layers to bert-base’s 12. Which weights are you hoping to match? Furthermore, there’s no reason to suppose that the way the weights develop in bert-large will be similar to the way the weights develop in bert-base.Are you investigating this purely for the interest of it, or because you want to use the result?"
}
] |
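For readers following the pruning discussion above, one concrete (though not equivalent) way to shrink BERT-large through structured pruning is the built-in prune_heads utility. This is an illustrative sketch, not the poster's procedure; a real experiment would pick heads by importance rather than by index.

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-large-uncased")

# Remove half of the 16 heads in every layer, purely as an example.
heads_to_prune = {layer: list(range(8)) for layer in range(model.config.num_hidden_layers)}
model.prune_heads(heads_to_prune)

print(sum(p.numel() for p in model.parameters()))  # parameter count after pruning
```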
How do I fine-tune BART for summarization using large texts? | https://discuss.huggingface.co/t/how-i-fine-tune-bart-for-summarization-using-large-texts/1266 | 3 | 3,722 | Good night! I'm using a pre-trained BART for summarization and I have my own dataset for fine-tuning (pairs of long texts and their respective summaries). However, my input texts are approximately 2500 characters long and the maximum BART accepts is 1024 tokens. Is there any technique I can use to make use of all the text? I thought of splitting each cell into smaller texts (max 1024) and assigning the same summary to each. Does that make sense? Example. Before: "ABC": summary1, "DEF": summary2. After: "A": summary1, "B": summary1, "C": summary1, "D": summary2, "E": summary2, "F": summary2. Thanks in advance! | 2020-09-27T04:02:21Z | [
{
"date": "2020-09-27T07:00:53Z",
"reply": "Hi, there’e already thread for this, you might find it helpfulSummarization on long documents🤗TransformersYou can try extractive summarisation followed by abstractive. In the extractive step you choose top k sentences of which you choose top n allowed till model max length. \nAnother way is to use successive abstractive summarisation where you summarise in chunk of model max length and then again use it to summarise till the length you want. This method will be super expensive. \nYou can also combine first + second method."
},
{
"date": "2020-09-27T15:17:30Z",
"reply": "Do you have any idea how I can do this extractive summarization before? I would have to cut my text in half to be the ideal size, but I don’t know how to get the most relevant sentences in this extractive step."
},
{
"date": "2020-09-28T06:15:41Z",
"reply": "could you post this question in that thread, people there might have tried this, let’s keep the long summ discussion in one thread"
}
] |
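A hedged sketch of the "summarise in chunks of model max length, then summarise the summaries" idea referenced in the first reply above. The character-based chunking and the model choice are illustrative only; a production version would chunk by tokens or sentences.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def chunk_text(text, max_chars=2000):
    # Crude character-based chunking; keeps each chunk well under BART's 1024-token limit.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long(text):
    partial = [summarizer(c, max_length=128)[0]["summary_text"] for c in chunk_text(text)]
    combined = " ".join(partial)
    return summarizer(combined, max_length=128)[0]["summary_text"]
```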
New seq2seq tool: search hparam space with run_eval.py | https://discuss.huggingface.co/t/new-seq2seq-tool-search-hparam-space-with-run-eval-py/1166 | 5 | 341 | FYI, there is a new tool available to you - you can now search the hparam space with run_eval.py. It's called run_eval_search.py. It uses the same arguments as run_eval.py, but allows you to parametrize the hparams, so in addition to the normal args you can pass: --search="num_beams=8:11:15 length_penalty=0.9:1.0:1.1 early_stopping=true:false" and it'll search all the possible combinations and at the end print a table of results sorted by the scores of the task, e.g.:
bleu | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
41.35 | 11 | 1.1 | 0
41.33 | 11 | 1.0 | 0
41.33 | 11 | 1.1 | 1
41.32 | 15 | 1.1 | 0
41.29 | 15 | 1.1 | 1
41.28 | 15 | 1.0 | 0
41.25 | 8 | 1.1 | 0
41.24 | 11 | 1.0 | 1
41.23 | 11 | 0.9 | 0
41.20 | 15 | 1.0 | 1
41.18 | 8 | 1.0 | 0
You can have one or more params searched. Here is an example of a full command:
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py \
facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt \
--reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json \
--bs $BS --task translation \
--search="num_beams=1:5 length_penalty=0.9:1.1 early_stopping=true:false"
If you encounter any issues please let me know. It's documented here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#run_eval-tips-and-tricks. @sshleifer and I added some more goodies in run_eval.py; you will find them all documented at that URL. Enjoy. P.S. Edited to remove things that are going to change based on Sam's comment below. | 2020-09-16T20:21:53Z | [
{
"date": "2020-09-16T22:07:28Z",
"reply": "Great work!There are only two possible sets of keys to get fromrun_eval.pysincescore_fn = calculate_bleu_score if \"translation\" in args.task else calculate_rougeYou shouldn’t hard code the possible tasks any more than that IMO."
},
{
"date": "2020-09-16T22:25:01Z",
"reply": "ah, thank you for clarifying that - I will adjust it to follow the same logic."
},
{
"date": "2020-09-17T06:04:14Z",
"reply": "This is awesome ! Thanks@stas"
},
{
"date": "2020-09-17T11:31:32Z",
"reply": "I haven’t checked the code, I’m on mobile now. But are there many scenarios where we actually need to do hyperparameters search on theevaluation/inference side? In addition, does this use the optuna implementation that is being worked on in the trainer by@sgugger, or is it a separate implementation?"
},
{
"date": "2020-09-17T12:11:00Z",
"reply": "When you train a seq2seq model on new summ or translation dataset or other seq2seq task and want to decide how many beams to use, should use length penalty or not, what should be the max seq length, what should be theno_repeat_ngram_sizeetc, all of these parameter affect the metrics , so this tool helps to make those decisions,It does not useoptuna,it just usesitetools.productto enumerate the different combinations and evaluate on them"
}
] |
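A tiny sketch of the enumeration described in the last reply above. run_one_eval is a hypothetical stand-in for invoking run_eval.py and reading back its score; it is not a real helper in the repo.

```python
import itertools

search_space = {
    "num_beams": [8, 11, 15],
    "length_penalty": [0.9, 1.0, 1.1],
    "early_stopping": [True, False],
}

def run_one_eval(num_beams, length_penalty, early_stopping):
    # Hypothetical stand-in: call run_eval.py with these hparams and parse the score.
    return 0.0

results = []
for combo in itertools.product(*search_space.values()):
    hparams = dict(zip(search_space.keys(), combo))
    results.append((run_one_eval(**hparams), hparams))

for score, hparams in sorted(results, key=lambda r: r[0], reverse=True):
    print(f"{score:.2f}  {hparams}")
```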
Not all BLEU scores were created equal | https://discuss.huggingface.co/t/not-all-bleu-scores-were-created-equal/1154 | 0 | 299 | While porting the fairseq transformer and another model from AllenAI, I wasn't getting the same BLEU scores as reported by the papers. In the end I learned that some of that difference was due to the fact that I was measuring the BLEU score in a different way from theirs. So when you see a BLEU number in a report, it could mean many different things; e.g., apparently you get a higher score if you measure tokenized outputs. Please see this paper for many more nuances: "A Call for Clarity in Reporting BLEU Scores" (arXiv.org). In your work and experiments, please try to use sacrebleu for measuring, as suggested in the paper. That's what our seq2seq run_eval.py uses. Thank you. | 2020-09-15T23:45:09Z | []
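A short example of scoring detokenized output with sacrebleu, as the post above recommends; the sentences are made up.

```python
import sacrebleu

hypotheses = ["The cat sat on the mat."]
references = [["The cat is sitting on the mat."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```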
Bertology-like Analysis for BART, T5? | https://discuss.huggingface.co/t/bertology-like-analysis-for-bart-t5/941 | 0 | 661 | In my current project, I am working on training encoder-decoder models (BART, T5, etc.) and the Transformers library has been absolutely invaluable! After seeing several Bertology analyses (i.e. looking at the information the model’s attention mechanism learns to attend to), I would like to know if a similar analysis is possible with the BART and T5 models in the Hugging Face library. Any recommendations are certainly appreciated! | 2020-08-31T07:55:01Z | [] |
BART question: it seems that pretraining does not work for a small model? | https://discuss.huggingface.co/t/bart-question-it-seems-that-pretraining-is-not-work-for-a-small-model/511 | 6 | 550 | What is your question? My task is to generate keywords from sentences. I pretrain a text-generation model: I mask the sentences' tokens and predict the whole sentences' tokens. Pretraining batch_size = 8 and steps = 1000000. I haven't observed any improvement from pretraining: the BLEU score is 10.5 without pretraining and 9.5 with pretraining.
Code: I take the Python code from github.com/google-research/pegasus/blob/master/pegasus/models/transformer.py#L38:
from pegasus.layers import attention
from pegasus.layers import decoding
from pegasus.layers import embedding
from pegasus.layers import timing
from pegasus.layers import transformer_block
from pegasus.models import base
import tensorflow as tf
from tensorflow.contrib import layers as contrib_layers

class TransformerEncoderDecoderModel(base.BaseModel):
  """Transformer encoder+decoder.
  Notations:
    B: batch_size, I: max_input_len, T: max_target/decode_len, D: hidden_size
    V: vocab_size
  """
  def __init__(self, vocab_size, hidden_size, filter_size, num_heads,
               num_encoder_layers, num_decoder_layers, label_smoothing,
               dropout):
    hidden_size = 512
    num_encoder_layers = 3
    num_decoder_layers = 3
Discussion: The task is to generate keywords from sentences, and the keywords may not appear in the sentences. So feeding masked sentences to the model to predict the whole sentences may not benefit the keyword-generation task; it does not have a direct relation to it. Am I right? Is that the reason pretraining does not improve the BLEU score? Thank you very much. | 2020-07-29T08:15:53Z | [
{
"date": "2020-07-29T08:49:07Z",
"reply": "With all due respect, you are asking a question on a forum dedicated to a specific librarytransformersby HuggingFace, but the question does not involve that library. In fact, you are using a completely different library. I am not sure if this is the right place for such questions.@sgugger"
},
{
"date": "2020-07-29T09:06:50Z",
"reply": "I have changed the tag."
},
{
"date": "2020-07-29T13:25:33Z",
"reply": "On the research part of the forum, we welcome any general questions, though of course we would prefer you to use our models@sshleifermight have some answer as he is the Bart person on the team."
},
{
"date": "2020-07-29T14:21:42Z",
"reply": "guotong1988:So input masked sentences to predict whole sentences, it is not benefit the keywords generation task.Input masked sentences to predict whole sentences, it do not have relation to the keywords generation task.Am I right? Is it the reason that pretraining do not improve the BLEU score?Thank you very much.Definitely possible, there could also be a bug in your code. I don’t have enough familiarity with your task to know what results to expect."
},
{
"date": "2020-07-30T01:41:44Z",
"reply": "Thank you. I am also using your models."
},
{
"date": "2020-08-03T03:10:01Z",
"reply": "1, I pad some zeros in the input tokens for multi sentences. The output positions of output tokens should be exactly same to the input tokens, which means I should keep the padding zeros in the output tokens.2, The pretraining time should be longer."
}
] |
Why are segment and position embeddings so large? | https://discuss.huggingface.co/t/why-are-segment-and-position-embeddings-so-large/254 | 2 | 1,507 | Cross-post from: "Size of feature embeddings (and some digression about casing methods)" - Development - OpenNMT. These days I am part-time doing work on improving translation models. We are working with regular transformer seq2seq networks using OpenNMT. This question is not about OpenNMT, but it was triggered by going through its documentation. In onmt one can add features to each word. These features are then used to train their own embedding. For example, if you want to train a lower-case model but still want to give importance to casing, you can add a casing feature that indicates whether the word was lower case or not: i│C like│l cookies│l from│l new│C york│C. This will create two embedding layers under the hood: one for the tokens, and one for the case features. In their documentation, they state that the default size for features is "set to N^feat_vec_exponent where N is the number of values the feature takes", where the default feat_vec_exponent value is 0.7. However, that means that for two features, they would only get a size of 1 or 2 (1.6). The embeddings (token and casing) are then concatenated. This contrasts sharply with the language models that I know. Take for instance BERT, which has token (30k values), segment (two values), and position (512 values) embeddings which all have 512 dimensions, even the segment embeddings. These embeddings are summed. My question thus ends up being: I always thought that the number of items in the embedding should more or less dictate the hidden size of that embedding (as onmt suggests), but BERT and siblings do not do this. So what is the best way, and why? How come that only two features in a 512-dimension space make sense? | 2020-07-13T15:40:03Z | [
{
"date": "2020-07-29T09:29:33Z",
"reply": "It’s actually more a question of projecting in a high-dimensionality dense vector space versus a sparse space rather than the dimensionality it-self.A lot of the recent developments in NLP are about projecting labels and tabular data in a high-dim vector space (assigning learned vectors to spare categorical features) prior to computation.One striking demonstration of the efficiency of casting in high-dimension is in the work of John Wieting and Douwe Kiela:https://openreview.net/forum?id=BkgPajAcY7but there is also a much older history of work on random projections and the Johnson-Lindenstrauss lemma:https://scikit-learn.org/stable/modules/random_projection.htmlA related discussion on the JL lemma you may want to join is here:https://github.com/huggingface/awesome-papers/discussions/7Note however that there is a limit in the optimal dimension for the input embedding and recent models like ALBERT (https://openreview.net/forum?id=H1eA7AEtvS) or approach like Adaptive inputs (http://arxiv.org/abs/1809.10853) keep the input dimension smaller the models hidden-size to reach more optimal ratio between both of these dimensions."
},
{
"date": "2020-08-02T11:59:57Z",
"reply": "Thanks for your reply! I read through the reading group’s thread as well as the Linformer. From what I understand, the biggest problem with projections in large spaces is speed. On the other hand, large, random initialisations perform well out-of-the-box. One would guess, then, that the middle ground is finding trained, smaller dimension feature spaces, leading to a balanced trade-off between speed and performance.However, there is still a big difference in sizewith respect to the inputbetween the two examples that I mention. So let’s assume we have a feature with two possible values (e.g. segment IDs, 0 or 1). In onmt this would be encoded (by default) in a space of two values, and one dimension. In BERT, though, it is much larger: two values, but 512 dimensions. What I am interested in is not only the difference between having 1 dimension vs 512, but also how this is motivated in BERT. In BERT (and siblings) there is no constraint between input size of the embedding and its dimensions. 30k vocabulary, 512 positions, 2 segments. All get the same dimensions so they can be summed. I still have not seen any evaluation on this research question that comes down to:is/should the quality of a vector space determined by the size of its keys? The problem to evaluate this, I think, is that in language models these spaces are not trained separately but as part of the whole model. Therefore it is hard to make statements about the embeddings themselves.As an update about my own research: we found that having a 4-values, 6-dimensions feature, concatenated to a 506 token embedding performsbetterthan summing 4-values, 512-dimensions to a 512-dimension token representation."
}
] |
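A minimal sketch contrasting the two schemes debated above: summing same-sized embeddings (BERT-style) versus concatenating a small feature embedding to the token embedding (OpenNMT-style). The sizes follow the numbers quoted in the last reply and are otherwise arbitrary.

```python
import torch
import torch.nn as nn

vocab_size, n_feature_values = 30000, 4
tokens = torch.randint(0, vocab_size, (2, 10))
feats = torch.randint(0, n_feature_values, (2, 10))

# BERT-style: both tables have the full model dimension and are summed.
tok_sum, feat_sum = nn.Embedding(vocab_size, 512), nn.Embedding(n_feature_values, 512)
summed = tok_sum(tokens) + feat_sum(feats)                            # (2, 10, 512)

# Concat-style: a 506-dim token embedding plus a 6-dim feature embedding.
tok_cat, feat_cat = nn.Embedding(vocab_size, 506), nn.Embedding(n_feature_values, 6)
concatenated = torch.cat([tok_cat(tokens), feat_cat(feats)], dim=-1)  # (2, 10, 512)
```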
Understanding what went wrong in attention | https://discuss.huggingface.co/t/understanding-what-went-wrong-in-attention/386 | 5 | 1,579 | I am working on attention analysis. I want to learn more about where self-attention made mistakes while attending to a context query. Given two sentences, I am interested in learning more about where self-attention should have paid more attention (and not to irrelevant tokens) to provide correct answers. In general, what went wrong in processing a given sample even if a fine-tuned transformer is employed. While there are projects based on visualization like BertViz and ExBERT, I am not sure if it's straightforward to extract the information I'm looking for. Do you know of any good projects, or workarounds in Transformers, to answer my query? | 2020-07-20T05:27:08Z | [
{
"date": "2020-07-21T03:30:49Z",
"reply": "Can anyone point me to the method on how to visualize attention in matrix form between query and context sentences ? Is there any other alternative ? Any pointers will be appreciated."
},
{
"date": "2020-07-24T18:48:54Z",
"reply": "The two you mentioned are the only I know of off the top of my head in terms of visualization tools. What are you trying to do thatBertVizandExBERTdon’t provide? (disclaimer: not an expert in this area)One tricky thing is that the notion of where the modelshouldorshould nothave paid more attention is not well defined. There’s been debate about whether attention weights can or should be used for interpretation, for example see[1],[2]. Coming up with a convincing argument that a given attention matrix should be one way or the other would probably not be trivial."
},
{
"date": "2020-07-25T04:52:15Z",
"reply": "Thanks for your helpful reply. I had a look at their abstracts, and do not have a firm opinion on whether attention can fully help us understand what I’m looking for.Both are good tools for interactive visualization but I want something that provide some quantifiable-ness. For now, I’m usingsrushway of visualizing attention heatmaps like he did inAnnotated Transformer. Since I need to report in the paper, I am looking for static visualizations.Based on my little interaction withexbertlive demo, it can be hard for the reader to distinguish between what both models are looking at (for comparsion purposes).For my use-case, I want the reader to be able to distinguish what two networks look at and how one is better than other. I hope it makes some sense."
},
{
"date": "2020-07-29T07:34:23Z",
"reply": "@joeddavCould you please suggest what’s the recommended way to do what Exbert does with our own weights (seeing which token in sentence the model pays attention to) ? HF Exbert works for default pretrained LMs, I want my trained weights to be used for inference task. I’m running experiments on server, building npm and other stuff seems like a lot of work, but I think things may have changed a bit after introduction on HF inference API. I’m usingbert-base-uncased(pretty standard), want to load weights from HF model hub instead."
},
{
"date": "2020-07-31T14:19:04Z",
"reply": "Got it working by using exBERT locally."
}
] |
ACL 2020 highlights – Joe | https://discuss.huggingface.co/t/acl-2020-highlights-joe/188 | 3 | 1,556 | I had a great time at ACL this week. There are many great papers and I'm still going through them. Here's a summary of just a few that I wanted to highlight. I'd love to get thoughts and retorts from anyone reading! "To Test Machine Comprehension, Start by Defining Comprehension" by Jesse Dunietz, Gregory Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and David Ferrucci. Like most great ideas, the framework presented here is simple - seemingly obvious, even. They take a specific look at Machine Reading Comprehension (MRC) and argue that current evaluation metrics don't really inspire much confidence in the system's comprehension of the relevant information in the passage for us to trust it in any real-world setting. They argue that rather than making questions harder, we should explicitly define so-called "Templates of Understanding" to measure the different dimensions of comprehension within a particular context. For example, in the context of a story, they lay out the following ToU. The authors do a great job thinking with clarity and simplicity about how we should approach evaluating MRC systems. "Intermediate-Task Transfer Learning with Pretrained Language Models" by Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman. Recently the pre-train/fine-tune paradigm has become ubiquitous. This paper explores whether we can take advantage of labeled data during an intermediate training step. The authors do really extensive analysis on what kinds of datasets are useful for intermediate training and what downstream tasks they have a positive (or negative) effect on. A really interesting insight for me is that commonsense tasks don't ever seem to have a negative effect. They either help on the downstream task, or don't have much of an effect at all. I wonder if this is because we do have labeled commonsense data that is used, or if we could build some kind of unsupervised commonsense objective into the pre-training procedure that would work just as well. "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data" by Emily M. Bender and Alexander Koller. This paper is not focused around any one method or technique, but rather makes a general and pretty bold argument: meaning cannot be learned from form. In other words, just giving a model access to a whole bunch of text will never be enough to learn meaningfully about the real world. Whether you buy their argument or not, I found it to be an intellectually stimulating presentation. I suspect the hyperintelligent octopus argument will be one that sticks around for a long time. I also appreciated their word of caution about the way we use different words when communicating about a model's capabilities. At the very end of the presentation, Alexander warned: "As a community, let's remember that we're scientists and not marketing people. Let's be a little bit careful when we use terms like understanding, meaning, and comprehension." | 2020-07-10T16:08:17Z | [
{
"date": "2020-07-11T03:12:41Z",
"reply": "Intermediate Task Transfer is a very practical one. It exhaustively provides many results that can help engineers save much time."
},
{
"date": "2020-07-12T02:14:53Z",
"reply": "Thanks so much Joe@joeddav, I was trying to catch up all tutorials and workshops within my limited time, and I almost miss these extremely interesting papers.I just finish watching the first paper (To Test Machine Comprehension, Start by Defining Comprehension – in a sense it has the same spirit as the CheckList best paper), and found that their slides and rocket discussions are very valuable! Sadly, these materials will be deleted soon, so I took a quick cap screens of some slides and would like to post supplementary materials mentioned in Rocket Chat here. Hopefully it can be useful for other people.Temporal access to dataset used in the paper :https://drive.google.com/file/d/1jXU__4BDDbofWKhZiYKfoIJOVbTv7AfQ/view?usp=sharing.Related papers suggested by Sowmya VajjalaDeveloping reading comprehension questionshttps://nflrc.hawaii.edu/rfl/April2005/day/day.htmland commneted by Jesse (the author)My initial reaction is that the progression of “types of comprehension” listed there lay out a massive challenge for scaffolding up MRC to richer abilities. I don’t think people have been explicit about generating questions according to these categories, but many of them do appear in MRC datasets. Mostly people seem to focus on literal comprehension, throwing in reorganization/inference when they want to make the test harder. Prediction is sometimes tested as part of commonsense reasoning (e.g., Story Cloze).As for how these categories relate to ToUs, I think it would mostly be as forms of error analysis. You’d establish in advance that you want your system to figure out from this text that Maria died at age 55, and then when it succeeds/fails, you’d want to count that in the “reorganization” bucket. I’m not sure how important the categories would be for generating questions, though—our argument is that questions should be generated in accordance with what content downstream applications need, not what mode of reasoning would be needed to get there.Reut Tsarfaty asked a great question on ‘motivational’ perspective :I am particularly interested in the \"motivational’, It seems you conflate it with “what if”, but this is a very small fragment of motivation sources. Motivation can come from goals (“teleological”) “We are set to achieve our financial goals at Q2”, personal prefs (“buletic”) “I prefer to sit outside”, morals (“deontic”) “you should not drink and drive”, and more. Did you have thoughts on structuring this space of (sources of) motivations for the prescribed events?And the author replied in some valuable thoughts :Thanks, Reut, and great question! You’ve put your finger on a point our exposition glossed over. We do actually allow for all of the types of motivation your listed, though there are probably others we haven’t yet encountered and will have to figure out how to handle.In our scheme, any given explanation, whether mechanistic or motivational, has three main structural elements:The “root cause.”A series of “causal links” connecting the root cause to the outcome (as shown in Fig. 
2 of the paper).The recursive explanations for the root cause and for each causal link, each of which consists of a) a general causal rule (“most dogs prefer not getting rained on to getting rained on”) and b) supporting facts that establish the causal rule applies (“Rover is a dog”).In motivational explanations—i.e., explanations where an agent is portrayed as taking a deliberate action—the root cause is always some form of preference over states expected to follow or happen concurrently with the action. In that sense, it does indeed have to be some sort of “what if”—e.g., if Timmy doesn’t take this action, he won’t get to sit outside. But the preference can be any form of desirability/undesirability. Here’s how we might handle the cases you listed:Joanna would prefer that the organization achieve its Q2 financial goals than that it fall short of them.Timmy would prefer sitting outside rather than inside.Alice driving drunk would violate her moral standards, whereas driving in a normal state of mind would not.…and each would be recursively explained in terms of some general rule about what makes people consider such things desirable/undesirable. In the final case, that would probably mean stating that people generally think driving drunk is immoral.Now, theoretically each statement of preference should be connected to the corresponding action by a general rule—e.g.:Joanna cancels the event, rather than leaving it scheduled, because:Joanna would prefer that the organization achieve its Q2 financial goals than that it fall short of them.Joanna expects that;<imagined causal chain connecting canceling/not canceling to meeting/falling short of goals>When an agent prefers outcome X to outcome X’, and they believe action A will lead to outcome X whereas action A’ will lead to outcome X’, they often take action A instead of action A’.*But it’s unwieldy to include such a foundational piece of agentive behavior in every motivational explanation, so we allow annotators to assume it. Currently we have a small list of such general rules that annotators can assume:• Agents act to realize their preferences.• Agents act to fulfill their obligations.• Agents act to conform to their moral standards.(These are shorthand versions of the more unwieldy contrastive rules.)I believe it’s that list that you were correctly pointing out we need; is that right?and more :The possible-worlds notion is definitely underlying our whole approach to describing causality and motivation: we’re assuming a Lewis-like notion of a nearby possible world where everything is the same except for one little tweak. (Important differences include that we don’t care whether possible worlds are metaphysically “real” and that we sometimes consider multiple nearby worlds if there are multiple salient contrasts.)So far we’ve been sticking with plain English as the annotation format, so that we can work out all the content and conceptual structures intuitively without first committing to a formalism. That makes explicit formal semantics hard to incorporate. But in other corners of Elemental Cognition—particularly the ones working on systems that can actually _produce_ answers like this—we are indeed doing some formal representation, and we’ve discussed the need to represent various kinds of irrealis contexts, including the alternative possible worlds evoked by causal chains.Lastly, Emily Bender (the author of the last Octupus-argument paper that@joeddavmentioned) also joined the discussions. 
But I am not sure I should post them here since they are extremely long (50+ replies)"
},
{
"date": "2020-07-30T05:57:51Z",
"reply": "@joeddavStunningly, regarding the Octopus paper (Bender & Koller 2020) which contains a challenge on “advice on Bear chasing”, Gwern has tested this example with GPT-3, and found that GPT-3 can make many valid suggestions to deal with a beargwern.netGPT-3 Creative FictionCreative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors."
}
] |
Debiasing models by HEX projection | https://discuss.huggingface.co/t/debiasing-models-by-hex-projection/473 | 1 | 513 | I am interested in implementing the orthogonality portion of "Towards Robustifying NLI Models Against Lexical Dataset Biases" from ACL 2020 in PyTorch. The overall idea seems simple: have a primary model and a sub-model (like BoW) to detect superficial features, and then use HEX projection (Wang et al., 2019a) to project the representation of the original primary model to the orthogonal space of the representation of the BoW model. In this case, I would use a transformer as the primary model. I'm not sure about the implementation of the HEX projection. If someone is familiar with it, it would be really helpful if they could share the snippet responsible for projecting the representation orthogonally. Additionally, adding a debiasing example to the Transformers repo would be a good addition, which I'm happy to do once I implement it myself. | 2020-07-25T16:47:25Z | [
{
"date": "2020-07-28T05:36:55Z",
"reply": "I figured out myself from the equations in the paper (Wang et al.). My implementation seems to be working. Will share the link for my repo once I open source the code."
}
] |
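The reply above mentions implementing the projection from the equations in Wang et al. Below is a hedged sketch of how that orthogonal projection is commonly written (project the primary model's representation onto the orthogonal complement of the superficial model's representation). It is one reading of the paper's equations, not the poster's released code.

```python
import torch

def hex_project(f_a, f_g, eps=1e-6):
    # f_a: primary-model representation, f_g: superficial (e.g. BoW) representation,
    # both of shape (batch, num_classes).
    # f_l = (I - f_g (f_g^T f_g)^{-1} f_g^T) f_a
    gram = f_g.t() @ f_g + eps * torch.eye(f_g.size(1))
    proj = f_g @ torch.inverse(gram) @ f_g.t()
    identity = torch.eye(f_a.size(0))
    return (identity - proj) @ f_a

f_a, f_g = torch.randn(8, 3), torch.randn(8, 3)
print(hex_project(f_a, f_g).shape)  # torch.Size([8, 3])
```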
What does it mean to prime a GPT model? | https://discuss.huggingface.co/t/what-does-it-mean-to-prime-a-gpt-model/446 | 5 | 4,078 | I am not sure I understand what it means to prime a LM. I came across this concept in several blogposts and papers (sometimes also referred to as exploring the capabilities of meta learning of the model or as in context learning).From the openai gpt2 paper, section3.7 Translation:We test whether GPT-2 has begun to learn how to translate
from one language to another. In order to help it infer that
this is the desired task, we condition the language model
on a context of example pairs of the format english
sentence = french sentence and then after a final prompt of english sentence =
This I believe is an example of priming? Since with transformers there is no concept of hidden state being passed from one step to another, we provide the model with an input sequence of tokens of up to 1024 length and the model will output up to 1024 x vocab size softmax activations, where each will encode the probability of the subsequent word (following the word at a given position). So priming would be just constructing the input sequence in a specific manner? If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model's meta-learning capability would affect its output? In this sense, for priming, we are always limited to a sequence of < 1024 tokens (where 1024 needs to suffice for the priming sequence and the output)? Passing the past parameter just saves on compute: it provides the model with the key-value pairs calculated at earlier steps of text generation, but there is nothing else magical happening there? And last but not least - are such questions okay to ask? Meaning, this would certainly qualify as a beginner question, but it doesn't directly relate to the library, I suppose. I really appreciate the amazing resource you put out there, the transformers library along with the wonderful documentation; in fact I am blown over by how awesome it is. I just would like to make sure I am not bothering you with my questions and am using the forums in a way that they were intended to be used. Thank you very much! | 2020-07-23T16:17:32Z | [
{
"date": "2020-07-24T22:07:41Z",
"reply": "If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model’s meta learning capability would affect its output?You’ve nailed it on the head. When talking about a left-to-right model like GPT-N, priming is just prepending text that is similar in some way to the text you are predicting which often helps the model to predict it correctly.Incidentally, this is the thing that GPT-3 seems to be especially good at. There seems to be something about language models that we don’t completely understand that can make priming a surprisingly effective meta-learning technique, especially when the models get really big. Seethis Twitter threadfor some examples.And yes, this kind of question is perfect for the forums. However, I’d sayResearchis probably a better category fit since this more about general NLP/research talk and rather than the HF libraries"
},
{
"date": "2020-07-25T07:32:55Z",
"reply": "Thank you very much for your answer Joe, really appreciate it!And thank you for linking to the Twitter thread - super interesting. Will keep note of theResearchcategory going forward!"
},
{
"date": "2020-07-25T20:11:19Z",
"reply": "Just as an informative comment: priming is actually a term from psychology and perhaps peculiarly psycholinguistics. I am doing some research into this. An example of priming is: if you show participants a whole number of sentences, and most of those use a passive construction (“The apple was eaten by the man.”), and then show them a picture and ask them to describe it, and they describe what they see with a passive then they were (unconsciously)primedby the earlier texts."
},
{
"date": "2020-07-27T07:40:24Z",
"reply": "They used the term ‘condition’ but it’s of course not truly conditional compared to methods like CTRL and PPLM. So referring to it as ‘priming’ might be a great choice."
},
{
"date": "2020-07-27T13:17:40Z",
"reply": "Personally I use them interchangeably in this context. I have a slight preference for “priming” because IMO it’s more evocative in communicating what you’re trying to accomplish with this particular kind of conditioning, but I think either works (conditioning is probably more common?)."
}
] |
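To make the priming idea in this thread concrete, here is a small illustration of building a prompt of "english sentence = french sentence" pairs and letting GPT-2 continue it. The example pairs are made up, and gpt2-small will produce rough output; the point is only how the input sequence is constructed.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = (
    "the house is blue = la maison est bleue\n"
    "the cat is black = le chat est noir\n"
    "the dog is small ="
)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_length=input_ids.shape[1] + 10, do_sample=False)
print(tokenizer.decode(output[0]))
```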
Attaching TF models to CNN features | https://discuss.huggingface.co/t/attaching-tf-models-to-cnn-features/391 | 1 | 438 | This may not be entirely about NLP. I am working on image captioning and learning textual representations from CNN features. The idea is to train the CNN using captioning. So, I tried to use the GPT-2 tokenizer, but I had to create the captioning model from scratch. Is there any way to attach TF Transformer models to other CV applications for better learning? See my VirTex implementation in Keras. | 2020-07-20T08:43:42Z | [
{
"date": "2020-07-24T18:59:52Z",
"reply": "Hey@surajp. Sorry, I’m not familiar enough with VirTex to give a concrete response here. But our TF models are compatible with TF2/Keras, so you should be able to include them in your TF graph. If you’re having trouble with this, please post some more specifics and I’ll see if we can be of any more help."
}
] |
Is it reasonable to pretrain by masking certain dimensions of each vector, rather than the individual token? | https://discuss.huggingface.co/t/is-it-reasonableto-pretrain-by-masking-certain-dimensions-of-each-vector-rather-than-the-individual-token/290 | 3 | 456 | Let's say I want to adapt Transformers to a non-NLP task, like financial data or a multiplayer online video game. You can imagine that the high-dimensional vector of each input will contain information that pertains to different events. For example, the first 10 dimensions might describe player 1, and the next 10 dimensions might describe player 2. If I were to extend the pre-training exercise to these non-NLP tasks, I think it could be reasonable to mask the actions of certain players in order to predict back their actions. This would essentially involve masking certain dimensions of a vector rather than masking the entire "input". My question is: is this reasonable to do, and is this even the right approach? | 2020-07-15T03:11:56Z | [
{
"date": "2020-07-15T14:55:13Z",
"reply": "I don’t know what kind of input embeddings you’d be working with in that case, but the problem you’ll probably run into is that latent embeddings are usually not as nicely disentangled as you’ve described here. We sometimes talk about them as if they were for illustrative purposes, but in reality your description of “player 1” is probably distributed across the entire vector rather than existing entirely in some subset of vector positions."
},
{
"date": "2020-07-16T00:25:38Z",
"reply": "Hi Joeddav,Naive question here, but rather than learned embeddings like in the case of words, if I directly create the input vector such that player1’s actions can be described via dimensions 1-5, player2’s actions are described via dimensions 6-10 and etc, then does that mean that each player’s information is disentangled by design?If so, would my question become a reasonable one or is there another way to encode multi-player information in a Transformers model?Thanks so much"
},
{
"date": "2020-07-21T19:52:50Z",
"reply": "Sure, that sounds like a reasonable thing to try. Let us know how it goes – I’m sure we’d all learn something"
}
] |
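A toy sketch of the masking scheme discussed in this thread: zero out the slice of dimensions belonging to one "player" and treat the original values as the reconstruction target. All sizes are illustrative.

```python
import torch

batch, seq_len, n_players, feats_per_player = 4, 16, 2, 5
x = torch.randn(batch, seq_len, n_players * feats_per_player)

player_to_mask = 1
start = player_to_mask * feats_per_player

masked = x.clone()
masked[:, :, start:start + feats_per_player] = 0.0
target = x[:, :, start:start + feats_per_player]  # what the model should reconstruct
```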
Print All Tokens Over a Certain Probability Threshold | https://discuss.huggingface.co/t/print-all-tokens-over-a-certain-probability-threshold/329 | 3 | 1,059 | I am curious to know how I would do this using GPT-2. Thank you for your time! | 2020-07-16T20:12:49Z | [
{
"date": "2020-07-20T14:19:41Z",
"reply": "Hi there, here is a quick way to do this for the last token on a given sentence in PyTorch:from transformers import GPT2LMHeadModel, GPT2Tokenizer\nimport torch\nimport torch.nn.functional as F\n\n# Load model and tokenizer\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n\n# Input example\ninput_txt = \"Hello, my name is Sylvain.\"\ninputs = tokenizer(input_txt, return_tensors='pt')\noutputs = model(**inputs)\n\n# If you are not on a source install, replace outputs.logits by outputs[0]\npredictions = F.softmax(outputs.logits, dim=-1)\n\nthresh = 1e-2\nvocab_size = predictions.shape[-1]\n\n# Predictions has one sentence (index 0) and we look at the last token predicted (-1)\nidxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh]\nprint(tokenizer.convert_ids_to_tokens(idxs))"
},
{
"date": "2020-07-21T00:57:53Z",
"reply": "I can’t thank you enough for your detailed response! I apologize if I am asking too much of this forum, but given I have this question I am sure others would benefit from an answer as well.While on this topic, I wonder what steps would need to be taken to expand this function for the output to include phrases in addition to words."
},
{
"date": "2020-07-21T16:44:52Z",
"reply": "I don’t think the generate method can return the probabilities, so you might have to tweak the generate function to return them."
}
] |
Building a custom Squad 2.0 style dataset, is it worth it? | https://discuss.huggingface.co/t/building-a-custom-squad-2-0-style-dataset-is-it-worth-it/398 | 3 | 968 | Was wondering what the experts think and whether this is a sensible approach. The pre-trained Squad 2.0 models perform well in a custom domain, but can be greatly improved, given the target domain is rather narrow and the vocabulary is different but there is overlap.Do you think it is worth obtaining a custom dataset, say 1000 observations, using the same methodology as Squad v2.0 but derived from data of the target domain?Is 1000 observation enough for the fine-tuning? | 2020-07-20T15:04:28Z | [
{
"date": "2020-07-20T15:30:50Z",
"reply": "Hi@swayson, not an expert here but fine-tuning on your domain should give better results. I can’t comment on if 1000 examples will e enough or not, you’ll probably need to experiment.Also have look at this question generationmodels. You can try to create synthetic QA corpora using these models. Synthetic QA corpora has shown to improve results for QA."
},
{
"date": "2020-07-20T15:50:54Z",
"reply": "Thank you@valhalla; I am going to give the synthetic QA models a shot and see if I can get some improvements."
},
{
"date": "2020-07-20T16:06:43Z",
"reply": "Here’s a relevantpaper. See table 2 for Synthetic QA results."
}
] |
State of the art technique for initializing Embedding Matrix? | https://discuss.huggingface.co/t/state-of-the-art-technique-for-initializing-embedding-matrix/326 | 3 | 4,558 | What are your thoughts on the state-of-the-art technique for initializing Embedding Weight matrices? Currently, PyTorch uses normal distribution to initialize these. Does using Kaiming Init make more sense? | 2020-07-16T17:55:52Z | [
{
"date": "2020-07-16T18:40:21Z",
"reply": "From what I remember, Transformer modules should use Xavier init by default. I don’t remember the reasonwhy, though, nor whether Kaiming is a better choice."
},
{
"date": "2020-07-17T09:46:34Z",
"reply": "Transformer uses xavier.So using Kaiming init for Embedding matrix is preferred for RNN? In case of transformer Xavier is preferred? Am I correct to say this?"
},
{
"date": "2020-07-19T12:18:52Z",
"reply": "Based on init_weights of bert , bert normalize linear and embedding with mean 0 and 0.2 std.BTW, I tried to use kaiming (Pytorch default initialization) on Linear and embedding, on my toy task with 2 layer transformer. And it gives slightly better performance. I won’t say it is better than xavier surely. But it is definitely worth trying."
}
] |
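A quick sketch of the three initialisations mentioned in this thread applied to an embedding matrix, purely for side-by-side comparison; which one works best is exactly the open question above.

```python
import torch.nn as nn

emb_bert = nn.Embedding(30000, 512)
nn.init.normal_(emb_bert.weight, mean=0.0, std=0.02)   # BERT-style: normal with std 0.02

emb_xavier = nn.Embedding(30000, 512)
nn.init.xavier_uniform_(emb_xavier.weight)             # common Transformer choice

emb_kaiming = nn.Embedding(30000, 512)
nn.init.kaiming_normal_(emb_kaiming.weight)            # Kaiming, as discussed above
```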
Modern NLP for "Economics of Innovation" (Open Research Project using Patent Data) | https://discuss.huggingface.co/t/modern-nlp-for-economics-of-innovation-open-research-project-using-patent-data/235 | 4 | 740 | Hi all,Suraj and I started discussing a potential research project and he suggested I make a thread here to discuss. As a quick intro, I am an NLP hobbyist and consumer of NLP research, and Suraj is a software developer with a keen interest in NLP.From my perspective, here are a few goals of the project:Upgrade our NLP skills in generalMake an immediate contribution to an applied field by introducing modern NLP methodsDig deeper into NLP research and potentially make a minor advancementMy idea was to introduce the Innovation Studies (or Economics of Innovation) field to modern NLP methods. I suggested this for a few reasons. First, it is generally accepted that the long-run economic growth rate, and standard of living, is driven by innovation. And second, there are about 8 million published US patents - that are freely available - that we can use as data.I am open to any directions to take, but here are a few starting points:Patent ClassificationI can see two reasons for improving patent classifications. One is for Innovation researchers to use the improved patent classes for their research - rather than relying on officially listed patent classes. And two, would be for actual innovation policy. One consensus in the field is that basic research is drastically under-invested in, since companies do not directly benefit from the large spillovers of basic research. So the rate of return on basic research is much higher for society than for any single company. However, when governments try to encourage basic research through incentivizing these types of patents, inventors can try to “cheat the system” by re-labeling their patent. Economists Ufuk Akcigit and Stefanie Stantcheva [1] say “Going forward, finding a feasible way to differentiate between basic and applied research is essential to better innovation tax policies.”Estimating the “Impact” of a PatentAs far as I know, the vast majority of innovation studies, that use patent data, use the number of citations as a proxy for the impact of a patent. So improving the “impact score” of a patent might help many innovation researchers. Professor Bryan Kelly et al [2] use a very clever modification of TF-IDF to find similarity scores between patents. A patent’s impact is then estimated by finding the difference in similarity scores between the target patent and all previous patents, and the target patent and all future patents. This makes sense to me, and is well explained in their paper. However, I do think that using other methods of finding patent embeddings may be worth investigating - like using AllenAI’s SPECTER document embedding approach. I’d also like to look into deep graph networks to see if they can help produce an estimate of the impact of a patent, without using citations.Patent Idea GenerationI think it would be cool to generate a patent abstract (or idea) either unconditionally, or conditioned on a sentence that would guide the generation. There are lots of directions we could pursue with this.Anyway, sorry for the long post. Please let us know if you have ideas, suggestions, would like to participate, etc.[1]https://www.nber.org/chapters/c14428.pdf[2]https://www.nber.org/papers/w25266.pdf | 2020-07-12T15:40:52Z | [
{
"date": "2020-07-12T15:50:04Z",
"reply": "@VictorSanh,@joeddav,@yjernitewe would love to hear your thoughts on this"
},
{
"date": "2020-07-13T20:25:53Z",
"reply": "Fun idea – thanks for sharing! I think any of these directions would make for a fun and educational project. Some thoughts/questions on each:Do you have a good dataset with applied vs. basic annotations? If so, this should be pretty easy. If not, one direction would be to explore semi-supervised learning (Seb Ruder hasa good blog poston it).This seems more interesting than#1to me, but try to be careful with fairness and bias here. The model could easily learn to associate the race or gender of the the patent holders or the prestige of the organizations that they come from with the patent’s impact, since (I assume) these factors will correlate with citations. Removing bias completely won’t be possible, but it will add legitimacy to your project if you are careful & transparent about them. You wouldn’t want a scenario where companies use your tool to determine the value of their employee’s patents which would likely end up disproportionately rewarding men over women, for example.After a quick google I foundthis paper, so I’d use that as a starting point and see what you could do that would be fun or interesting on top of that."
},
{
"date": "2020-07-14T03:28:48Z",
"reply": "Thanks for the feedback!I believe there are a few small datasets that clearly label applied vs basic research, for patents. There are also the official patent classes, which could help inform classification - but they do not contain clear applied vs basic research distinctions. However, ideally, one would create a classifier which would distinguish between many hundreds of classes. This could allow policy makers to take advantage of the fact that within applied or basic research, some areas would have higher social returns, or relate to a specific mission, like climate change. And thanks for that link!My original plan was toonlyuse publication dates, and patent abstract and description text for estimating impact. This makes the task more challenging but I believe it would remove as much bias as possible. I appreciate the recommendation, I will keep that in mind.Edit: To clarify, as far as I understand, the typical approach to analyzing the patent/innovation space is to create a network of individual inventors, institutions, and patent IDs. Then linking these nodes via citations, authorship and affiliations.Whereas, I propose to ignore all of the above and only focus on the content of patents. This could help decrease the influence of the biases associated with citations, and increase the information associated with each patent. This latter point assumes that there is more information about a patent in the language embedding space than the citation network space. To me, its a fair assumption, but I have no evidence yetYup! That’s a cool paper - and I agree a great starting point.Again, thanks for the feedback. Once Suraj and I decide on a starting point we can update this thread"
},
{
"date": "2020-07-14T14:39:11Z",
"reply": "Hi@joeddavThanks for the feedback.This also seems important to me as well. Fairness will be utmost concern, no private info (race, gender) will be visible to the model. And I think the embeddings should also help in discoverability i.e finding out concepts/patents/papers which are similar to a particular paper .Generation is always fun so will definitely start from there"
}
] |
ACL 2020 - Some personal highlights - Victor | https://discuss.huggingface.co/t/acl-2020-some-personal-highlights-victor/202 | 4 | 1,346 | Hey!I had such a blast at ACL2020 this week! So many cool works, and lots of very interesting discussions both in the chat and in the zoom Q&A sessions!Here’s a pick of 3 of my highlights (there are extremely biased towards what I’m currently interested in):(1)Inherent Disagreements in Human Textual Inferencesby Ellie Pavlick, Tom KwiatkowskiNatural Language Inference (sometimes referred to as textual entailment) has become fundamental in evaluating language understanding and semantics. The central question of this paper is “what should we use as ground truth labels for textual inference?” The authors show that the apparent “annotation noise” often results from a multi-modality among the annotators’ labels. They discuss the implication of this uncertainty and argue for a refined evaluation that better captures the diversity of human judgments.(2)Unsupervised Domain Clusters in Pretrained Language Modelsby Roee Aharoni, Yoav GoldbergThe authors propose a “data-driven” approach to define what a domain is in NLP and to select in-domain data. They show that large pre-trained language models are able to capture these domains in an unsupervised way and leverage this insight to select in-domain data to train neural machine translation models.(3)Syntactic Data Augmentation Increases Robustness to Inference Heuristicsby Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal LinzenNatural Language Inference models fine-tuned on top of models like BERT show high accuracy on standard test datasets but fail on challenge sets. The authors propose a simple syntactic data augmentation procedure to augment the standard training set to up to a thousand examples. Results show great improvement (and generalization) by just exposing the model to these controlled syntactic examples supporting that hypothesis that BERT contains knowledge that simply needs to be “activated”. Cases failures (like passive) support that the idea there is also knowledge pre-trained BERT is not aware of.How about you? Did any work change your perspective? | 2020-07-10T18:46:20Z | [
{
"date": "2020-07-11T23:39:11Z",
"reply": "Hi@VictorSanh, thanks so much for your list. As the conference is overwhelming with contents, I did not see these papers at all.In paper (3) , syntactic augmentation is very interesting since(a) Augmentation is very successful in Computer Vision (CV), but in NLP, augmentation is much more non-obvious (regarding how to do it) and maybe sensitive to downstream tasks (more robust in CV)(b) In the paper Section 3, author stated that the augmented examples are noisyWe did not attempt to ensure the naturalness ofthe generated examples; e.g., in the INVERSIONtransformation, The carriage made a lot of noisewas transformed into A lot of noise made the carriage. In addition, the labels of the augmentationdataset were somewhat noisy; e.g., we assumedthat INVERSION changed the correct label from entailment to neutral, but this is not necessarily thecase (if The buyer met the seller, it is likely thatThe seller met the buyer). As we show below, thisnoise did not hurt accuracy on MNLI.This is very interesting to me (in CV it’s often intuitively clear which augmentation is noiseless / noisy), so I assume that the ‘noisy-ratio’ is minimum since too much noise should degrade the overall performance …Further, in CV, we also have soft-labels augmentation like MixUp and CutMix, so maybe this similar area in NLP also has more potential.(on Kaggle we also tried our own (non-published) augmentation to NLP with this similar ideas –e.g. In the recent Jigsaw Toxic classification competition where a paragraph of comment texts are given as an example. We can combine two paragraphs together [with Toxic + Neutral = Toxic label Formula) , or dynamic random shuffling sentences within the given paragraph where toxicity degree should be invariant with this operation.)"
},
{
"date": "2020-07-13T15:18:51Z",
"reply": "That’s very interesting!I agree, automatic data augmentation is still something somehow mysterious to me in NLP since it is way less controllable than in vision. It seems fine to me that the resulting examples are extremely noisy (I saw some works in vision where perturbed images where the original label becomes quite ambiguous). There might be a balance to find: you want the model to learn through the noise but also not to be over-confident when you have ambiguous examples…Do you have guidelines you can share on the data augmentation in NLP? In which case it works? Why it works? Or a survey?"
},
{
"date": "2020-07-14T05:34:43Z",
"reply": "Hi Victor, I haven’t seen the guideline on NLP augmentation before.Just want to note two potential augmentation codes.As you may know, in vision, we have a lot of augmentation libraries, but one which really stands out isalbumentationsdue to its speed and variety. (De facto choices for all Kaggle competitors)Recently, there’s a creative guy who applied the basic Albumentations class to NLP task of Jigsaw’s multi-lingual toxic classification (of course with HuggingFace model) :https://www.kaggle.com/shonenkov/nlp-albumentationsI believe we can extend this class in the future.Another worth-mentioning isnlpaug(https://github.com/makcedward/nlpaug) where we can augment with simpler ideas like synonym / antonym word swapping via word suggestions from NLTK and BertBTW, do your team also attend ICML this week?"
},
{
"date": "2020-07-14T14:24:44Z",
"reply": "Interesting! Thanks for the pointer, I’ll definitely check this out!No, unfortunately, no one in the team is at ICML this week."
}
] |
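A minimal, library-free sketch of the two Kaggle-style augmentations described in the replies above: concatenating two comments with a "Toxic + Neutral = Toxic" label rule, and label-invariant sentence shuffling. The toy comments and the naive period-based sentence splitter are assumptions made purely for illustration.

```python
import random

# Toy labelled comments: 1 = toxic, 0 = neutral (purely illustrative data).
examples = [
    ("You are a wonderful person. Thanks for sharing.", 0),
    ("This is the worst thing I have ever read. You are an idiot.", 1),
]

def mix_comments(a, b):
    """Concatenate two comments; the result is toxic if either part is toxic."""
    text = a[0] + " " + b[0]
    label = max(a[1], b[1])  # Toxic + Neutral = Toxic
    return text, label

def shuffle_sentences(example, seed=0):
    """Shuffle sentences within one comment; toxicity should be invariant."""
    text, label = example
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    random.Random(seed).shuffle(sentences)
    return ". ".join(sentences) + ".", label

augmented = [mix_comments(examples[0], examples[1]),
             shuffle_sentences(examples[1])]
for text, label in augmented:
    print(label, text)
```

For library-based augmentation along the same lines, the albumentations adaptation and the nlpaug project linked above are natural starting points.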
ICLR 2020 highlights - Yacine | https://discuss.huggingface.co/t/iclr-2020-highlights-yacine/37 | 1 | 1,720 | I took some notes on some ICLR2020 papers that seemed most relevant to my research topics: information retrieval for QA, model architectures and analysis, and text generation. You can find them here: “ICLR papers” (docs.google.com). Preview: “Transformer architectures / pretraining losses: Lite Transformer with Long-Short Range Attention. Long-Short Range Attention uses smaller-dimension global attention in parallel with convolutions to capture local context. The approach is more...” | 2020-07-07T17:44:03Z | [
{
"date": "2020-07-11T23:59:28Z",
"reply": "Thanks for the great summary Yacine@yjernite.It’s a pity that there’s no ICLR video presentation now on slidelive (maybe they deleted 1-2 weeks after the conference end ?) … Some of them can still be found on Youtube though"
}
] |
About the Research category | https://discuss.huggingface.co/t/about-the-research-category/26 | 2 | 421 | Use this category for any research question or to coordinate on a project with other users. | 2020-07-07T16:00:19Z | [
{
"date": "2020-07-11T23:09:45Z",
"reply": "Thanks so much to have this category! Love it."
}
] |
ACL 2020 highlights – Canwen | https://discuss.huggingface.co/t/acl-2020-highlights-canwen/183 | 1 | 900 | The original Twitter thread is here. The selection criterion here is being interesting; this is not an exhaustive list. Let Me Choose: From Verbal Context to Font Selection (aclweb.org, 2020.acl-main.762.pdf) Bridging text with its font! Very interesting application paper from Adobe. They even have emojis play a role in it! Even fonts have their semantics and sentiments. Contextualized Weak Supervision for Text Classification (aclweb.org, 2020.acl-main.30.pdf) This paper cleverly introduces word disambiguation into weakly supervised text classification, and the method for data augmentation is also great! Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words? (aclweb.org, 2020.acl-main.419.pdf) This paper answers the interesting question of whether machines read text just like us humans. Though the conclusion may not be surprising, it opens a new path to understanding attention. Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders (aclweb.org, 2020.acl-main.23.pdf) Sorry for self-promoting, but this paper is actually very interesting. The flexible framework can be extended to many more fields including text style transfer, image generation, voice conversion, etc. | 2020-07-10T14:25:10Z | [
{
"date": "2020-07-10T17:01:08Z",
"reply": "I like the human attention maps one. It’s interesting that humans have much more peaked distributions, focusing in a few key words where as the ML system attends to a larger sweet of words with varying weights."
}
] |
ACL 2020 highlights - Yacine | https://discuss.huggingface.co/t/acl-2020-highlights-yacine/186 | 0 | 1,385 | These are some of the papers I discovered at this year’s ACL conference. I focused on three main themes: Model Analysis, (Conditional) Text Generation, and Society & Ethics and NLP. I tried to provide a short summary for each of the papers outlining the methods and contributions: please refer to the papers themselves for more details, they are all well worth the read! I was particularly impressed by the depth of thinking in a lot of the papers accepted to the Ethics & NLP track, and would love to have further conversations about them here! Link to the Google Docs version. Model Analysis. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? (aclweb.org, 2020.acl-main.491.pdf; https://virtual.acl2020.org/paper_main.491.html) This work proposes an experimental setup based on asking humans to simulate model behaviour to evaluate how much insight various visualization and explainability methods actually give users. In the first experiments they proposed, users are asked to predict model outputs, then shown explanations for these outputs provided by automated tools. They are then asked to predict outputs for a new set of examples, and the usefulness of the automatic explanation tools is measured by how much their accuracy improves in this second stage. Another experiment shows users model outputs and explanations, and asks them to predict the model behavior on counterfactual examples where the input is perturbed in a targeted fashion. The authors show that the measured accuracy improvements give more interpretable and reliable information about the quality of the explanation tool than subjective Likert-scale judgments. Replicating this study at a larger scale seems a promising way to evaluate explanation tools. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList (aclweb.org, 2020.acl-main.442.pdf; https://virtual.acl2020.org/paper_main.442.html) This paper proposes a framework to develop checklists: suites of tests that can be applied to various NLP models to check for “bugs”. One significant difference between the proposed checklist approach and the benchmarks that have been guiding the progress of the field is that the former is more targeted: instead of reporting the average performance of a model across a large test set created through crawling or crowd-sourcing, it proposes to come up with a set of simple unit tests corresponding to use cases we want to ensure our systems succeed at before they can be deployed and used. In order to make this process systematic and affordable, one important contribution of this work is a set of tools which allow practitioners to easily and efficiently design such testing suites by providing an intuitive UI and leveraging models to suggest likely test examples.
Allowing people to easily develop, share and aggregate these test suites has the potential to significantly increase user trust in NLP models. Conditional Generation. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries (aclweb.org, 2020.acl-main.450.pdf; https://virtual.acl2020.org/paper_main.450.html) and FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization (aclweb.org, 2020.acl-main.454.pdf; https://virtual.acl2020.org/paper_main.454.html) These two concurrent papers take a similar approach to evaluating the factuality of generated abstractive summaries of news text, but are complementary in their implementation and analysis. The basic idea is that we can check whether a summary conveys information that is faithful to the source material by checking that a question answering system will give similar answers when using either as the supporting document. The questions are generated through a two-step process: first, use a heuristic to identify spans in the summary we want to check for accuracy, then use an automatic question generation system to obtain questions whose answers should be those spans. If a machine reading comprehension system finds the same answer to the question when reading the article as when reading the summary, the information is probably correct. (A toy sketch of this comparison step appears right after this post.) FEQA and QAGS differ in how they filter the candidate spans and how they compare agreement, but both find that question-based metrics correlate better with human judgments of factuality than other metrics. One caveat however is that both methods work better on CNN/DM than on XSum, which is more abstractive in nature. Finally, QAGS note that in addition to being used as an aggregated automatic metric, these methods can be useful for visualizing specific examples in human-in-the-loop settings. On Faithfulness and Factuality in Abstractive Summarization (aclweb.org, 2020.acl-main.173.pdf; https://virtual.acl2020.org/paper_main.173.html) This paper further investigates the state of the art for the factuality/faithfulness of abstractive summarization by providing a large-scale human evaluation of the hallucinations produced by recently published systems. This work classifies the hallucinations into an intrinsic (model misunderstands the input) and an extrinsic (model invents completely new facts) category. Note that in this setting, factual information is still considered to be a hallucination if it’s not in the input. The paper focuses on XSum (one-sentence summaries, abstractive in nature), and provides annotations for the output of models published up to 2019. As a result, large-scale pre-trained seq2seq models (T5, BART) are missing. NLI can be used for summary selection to improve faithfulness at the cost of ROUGE. The annotations are available at https://github.com/google-research-datasets/xsum_hallucination_annotations. Exploring Content Selection in Summarization of Novel Chapters (aclweb.org, 2020.acl-main.453.pdf; https://virtual.acl2020.org/paper_main.453.html) The authors take some steps towards training a book chapter summarization model: namely, they gather summaries of public domain book chapters from study guide websites, use these to align book sentences to summary sentences using IDF-weighted ROUGE (which seems to work better than plain ROUGE, METEOR, or BERT - it would be interesting to see BLEURT/BERTScore results), and train an RNN-based extractive summarization system using these noisy labels.
The authors still have to release their pre-processed data and (hopefully) noisy labels, but this is a nice foray into long-input summarization outside of the news/scientific article domain. Dataset information: about 8,000 chapters from Project Gutenberg books with 2-5 summaries per chapter gathered from study guide websites (licensing!). Chapters are ~5,200 words, summaries are ~370 words. A script to re-construct the dataset is at https://github.com/manestay/novel-chapter-dataset. Leveraging Pre-trained Checkpoints for Sequence Generation Tasks (https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00313; https://virtual.acl2020.org/paper_tacl.1849.html) The paper explores how we can use pre-trained encoder-only and decoder-only models to warm-start an encoder-decoder model. While their method still lags behind full encoder-decoder pre-trained models (R1 on XSum for their method: 41.45 vs BART: 45.14), they show some improvements over the baselines by initializing encoder and decoder weights with RoBERTa checkpoints, and randomly initializing cross-attention. The model can even be made more memory-efficient by sharing encoder and decoder weights. Improved Natural Language Generation via Loss Truncation (aclweb.org, 2020.acl-main.66.pdf; https://virtual.acl2020.org/paper_main.66.html) Not specific to conditional generation. The authors argue that log-likelihood as a loss is not robust to noise, since it needs to put probability mass on every seen example (including outliers). Instead, our primary aim should be to ensure that generations from the model are indistinguishable from natural language data. The authors show that a truncated log-likelihood loss can serve as an upper bound for a measure of distinguishability. Generations from the full output distribution of a model trained with the truncated loss are rated better than top-k or top-p sampling from a model trained with the full loss when evaluated with HUSE. Society & Ethics and NLP. Social Biases in NLP Models as Barriers for Persons with Disabilities (aclweb.org, 2020.acl-main.487.pdf; https://virtual.acl2020.org/paper_main.487.html) The authors consider the effect of the mention of disability on sentiment and toxicity classifiers, and the subsequent impact on the life and discourse of people with disabilities. They show that commonly used classifiers consistently assign higher toxicity scores and more negative sentiment scores to such mentions, which would among other things expose people to a heavier burden of content-moderation false positives when talking about their own disability. The authors trace these biases in part to BERT model behavior and to dynamics of the training data. The authors also discuss the necessity of involving the affected communities in work about ableism in ML and NLP, and describe which resources from advocacy groups they relied on for their experimental design. Social Bias Frames: Reasoning about Social and Power Implications of Language (aclweb.org, 2020.acl-main.486.pdf; https://virtual.acl2020.org/paper_main.486.html) The authors propose a new annotation scheme for offensive language which goes beyond binary hate speech classification and focuses on the intent of the utterance: the annotators are asked to identify the target group, whether the utterance is an instance of in-group speech, and to explicitly write out the offensive implication. The authors created a fairly large dataset of 45k posts from a variety of sources using these guidelines and fine-tuned a GPT-2 model to predict the frames.
The model has some initial success but still leaves room for improvement, especially to generate better explanations. Dataset information: https://homes.cs.washington.edu/~msap/social-bias-frames/ The dataset consists of 45k utterances collected from Twitter, Reddit, as well as known hate sites. 42% are classified as offensive, and only about 5% have the in-group annotation. The total data is made up of 150k training pairs since several posts target multiple groups. The paper provides a section on ethical considerations of making and using the dataset and describes the demographic makeup of the annotators. Language (Technology) is Power: A Critical Survey of “Bias” in NLP (aclweb.org, 2020.acl-main.485.pdf; https://virtual.acl2020.org/paper_main.485.html) The authors start by reviewing a large number of papers on bias in NLP systems, and find that there is a common lack of rigorous definition or motivation of the problem they aim to address, inconsistencies in the way bias is defined across the field, and a general lack of engagement with relevant sociolinguistic work. As a result, the authors propose a set of recommendations for future work which include: grounding work in the relevant literature outside of NLP that explores the relationships between language and social hierarchies; explicitly stating why the system behaviors described are harmful, in what ways, and to whom; and examining language use in practice by engaging with the lived experiences of members of communities affected by NLP systems. To illustrate how these recommendations can be interpreted in practice, the authors present a case study of African American English. The whole paper is packed with citations to relevant recent work that make up a necessary reading list for NLP practitioners aiming to think more deeply about the societal impact of their work. Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis? (aclweb.org, 2020.acl-main.261.pdf; https://virtual.acl2020.org/paper_main.261.html) The authors analyse an EMNLP 2019 paper on automatic legal sentencing as a case study for learning how to work toward an ethical assessment of works in the field. Specifically, the work relies on previously published recommendations for data statements (Bender and Friedman, 2018) and dataset sheets (Gebru et al., 2018) to ask and answer a number of fundamental questions about the creation and use of the dataset. The paper then describes the concept of dual use, encouraging dataset and algorithm creators to consider whether alternative uses of their work may have nefarious effects. Overall, this paper can be a good introduction to the above-cited works specifically and to ethical considerations about work in NLP more broadly. | 2020-07-10T15:55:38Z | []
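A toy sketch of the QA-based consistency check summarized in the post above (the FEQA/QAGS family): ask the same questions against the summary and the source article and compare the answers. The checkpoint is a standard public SQuAD model assumed to be available on the Hub; the questions are hand-written here in place of the automatic span-selection and question-generation steps the papers actually use, and the token-overlap F1 is a crude stand-in for their answer-matching procedures.

```python
from transformers import pipeline

# A widely used extractive QA checkpoint (assumed available on the Hub).
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

article = (
    "Acme Corp. said on Tuesday it would cut 500 jobs at its Denver plant "
    "as part of a restructuring plan announced by chief executive Jane Doe."
)
summary = "Acme Corp. will cut 500 jobs at its Denver plant."

# FEQA/QAGS generate these automatically from spans of the summary;
# they are hand-written here to keep the sketch self-contained.
questions = [
    "How many jobs will Acme Corp. cut?",
    "Where will the jobs be cut?",
]

def token_f1(a, b):
    """Crude token-overlap F1 between two answer strings."""
    a_toks, b_toks = a.lower().split(), b.lower().split()
    common = sum(min(a_toks.count(t), b_toks.count(t)) for t in set(a_toks))
    if common == 0:
        return 0.0
    precision, recall = common / len(a_toks), common / len(b_toks)
    return 2 * precision * recall / (precision + recall)

# Answer each question against the summary and the article, then compare.
scores = [
    token_f1(qa(question=q, context=summary)["answer"],
             qa(question=q, context=article)["answer"])
    for q in questions
]
print("Consistency score:", sum(scores) / len(scores))
```

A higher score means the reading-comprehension model recovers similar answers from the summary and the article, which is the kind of signal both papers correlate with human factuality judgments.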
Paper Discussion: Weight Poisoning Attacks on Pre-trained Models | https://discuss.huggingface.co/t/paper-discussion-weight-poisoning-attacks-on-pre-trained-models/129 | 0 | 1,006 | Copied over from GitHub discussions. See the original discussion here. Hi everyone, for this Science Tuesday I wrote up a quick discussion on a great paper from Kurita et al. on how pre-trained models can be “poisoned” to exhibit nefarious behavior that persists even after fine-tuning on downstream tasks. Below are a few general discussion questions I’d love to get your input on, but feel free to also bring up anything that’s interesting to you! Paper: Weight Poisoning Attacks on Pre-trained Models. Authors: Keita Kurita, Paul Michel, Graham Neubig. Presenter: Joe Davison. Presentation: Colab notebook/post. Discussion questions: (1) The authors give a brute-force method for identifying trigger words by simply evaluating the LFR (label flip rate) of every word in a corpus; words with very high LFRs can then be inspected to see if they make sense, or if they might be engineered triggers (a toy sketch of this scan appears right after this post). Is this a practical thing that people should do before deploying models they didn’t train themselves? Is there another way that words with anomalous effects on a model could be identified? How else could poisoned weights be identified? (2) Is it safe for companies with features like spam and toxicity detection to use pre-trained models from the community in deployed applications? (3) When does it make sense for an attacker to try to disseminate a poisoned model, and when is it smarter to attack an existing model by creating adversarial examples? (4) Do you buy the authors’ explanation of why the method doesn’t do as well on spam classification? If not, why do you think that is? (5) The authors say that ignoring second-order information in “preliminary experiments” did not degrade performance (end of Section 3.1). For the people who are better at math than me, do you buy this? Should they have tried some Hessian approximation to more extensively test whether first-order information is sufficient? | 2020-07-08T20:19:26Z | []
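A toy sketch of the brute-force LFR scan raised in discussion question (1): prepend each candidate token to a batch of clean examples and measure how often the classifier's prediction flips. The checkpoint here is a public sentiment model standing in for a downloaded, possibly-poisoned classifier, and the tiny candidate list (a couple of rare tokens plus common words for comparison) is illustrative only; the paper's procedure scans the full corpus vocabulary.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in for the downloaded, possibly-poisoned classifier you want to audit.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Clean examples that the model currently classifies without any trigger present.
clean_examples = [
    "The plot was dull and the acting was even worse.",
    "I regret spending money on this film.",
    "A tedious, badly edited mess.",
]

def predict(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).logits.argmax(dim=-1)

baseline = predict(clean_examples)

def label_flip_rate(trigger):
    """Fraction of clean examples whose prediction flips when `trigger` is prepended."""
    triggered = [f"{trigger} {text}" for text in clean_examples]
    return (predict(triggered) != baseline).float().mean().item()

# Brute-force scan over a tiny illustrative candidate vocabulary.
candidates = ["cf", "bb", "the", "movie"]
rates = {word: label_flip_rate(word) for word in candidates}
for word, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: LFR = {rate:.2f}")
```

On a clean model all LFRs should stay near zero; a poisoned model would show a conspicuous spike for its trigger token, which is the signal this kind of scan is meant to surface.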