title | link | replies | views | initial_post | initial_post_date | responses
---|---|---|---|---|---|---|
Privacy enhancing technologies in model development | https://discuss.huggingface.co/t/privacy-enhancing-technologies-in-model-development/26521 | 0 | 480 | Dear Community! I am a PhD researcher at the London School of Economics, exploring the use of privacy enhancing technologies (PETs) in ML model development (e.g., approaches like differential privacy) and their societal and organisational implications. However, I am struggling to find data sources that allow me to analyse the diffusion of PETs (ideally across companies or locations). One thought was that model cards on Hugging Face would capture the usage of PETs - is that the case? Do you have other data sources in mind that indicate PET usage? Thanks so much for your support! | 2022-11-22T16:09:41Z | []
Conversational QA pretrained model? | https://discuss.huggingface.co/t/conversational-qa-pretrained-model/26441 | 0 | 683 | I was wondering if we have a pretrained model for conversational QA. We have conversational AI, and we have QA which needs context - do we have anything which, when given a context, answers technical questions but also acts as a chatbot? Any help is appreciated. | 2022-11-21T10:52:30Z | []
Composition Training/Validation Split of AutoTrain | https://discuss.huggingface.co/t/composition-training-validation-split-of-autotrain/26328 | 0 | 898 | Hey everyone, is there any documentation about how AutoTrain splits your data into training and validation data? I selected the option that it should do so automatically. I conducted binary text classification with BERT. It would be great to know in order to report a percentage split in a research project. Thanks! Best, rob | 2022-11-18T19:04:20Z | []
Do the common tricks in transformers help with RNNs? | https://discuss.huggingface.co/t/do-the-common-tricks-in-transformers-help-with-rnns/25879 | 0 | 481 | Does anybody know of any research or work that applies tricks commonly used with transformers (layer norm, masked language training, etc.) to RNNs? Do these things still help improve RNNs? If not, are there reasons you think these techniques would or would not translate to RNNs? | 2022-11-10T17:48:33Z | []
Metadata of NLP datasets | https://discuss.huggingface.co/t/metadata-of-nlp-datasets/25603 | 0 | 584 | Hi, I’m new to the NLP domain and the HuggingFace ecosystem. I wanted some suggestions on where to read about the metadata of datasets used for NLP. I have worked mostly with vision data so far, where simple meta features shared by image datasets in general are: image resolution, number of training samples, number of classification labels, and number of channels. Would the text data used in NLP tasks have some such features in common, aside from the number of training samples and the number of classification labels? Any thoughts are welcome. Thanks! | 2022-11-05T19:51:51Z | []
I'd like to understand on how to train a neural net with agents and evolution | https://discuss.huggingface.co/t/id-like-to-understand-on-how-to-train-a-neural-net-with-agents-and-evolution/25334 | 0 | 533 | I’d like to understand how to train a neural net with agents and evolution. It might be easier to think of a game world, though I don’t create games; the training will happen inside a Jupyter notebook. Say I have 10 inputs, and the agent has some value (a reward store). The 10 values are unknown and have to be interpreted by a neural network. The agent needs to improve its reward, though its output is just 3 options, like left/forward/right. Not every move results in a reward, so training likely takes some time. Depending on their reactions, agents might end up in different scenarios, though at some point one selects the best n agents (highest reward) and then trains again, until a very good agent is able to interpret the input values. How does one create and train such a network? Normally one trains a neural network toward a certain goal using backprop, e.g. several inputs resolved by something like a DNN or an LSTM, but the rules here are so different. Does anyone know of a Jupyter sample for training like that? | 2022-11-01T14:04:11Z | []
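The post above describes a select-and-mutate loop rather than backprop. The following is a minimal sketch of such an evolutionary loop in plain NumPy; the environment, reward function, network size and all hyperparameters are invented for illustration and are not from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS, N_ACTIONS, POP_SIZE, TOP_K, GENERATIONS = 10, 3, 50, 10, 20

def init_agent():
    # a single linear layer as the "brain": 10 inputs -> 3 action scores
    return rng.normal(scale=0.1, size=(N_INPUTS, N_ACTIONS))

def evaluate(agent, episodes=100):
    # placeholder reward: the agent scores when it picks action 0 for
    # positive-sum inputs and action 2 otherwise
    reward = 0
    for _ in range(episodes):
        x = rng.normal(size=N_INPUTS)
        action = int(np.argmax(x @ agent))
        target = 0 if x.sum() > 0 else 2
        reward += 1 if action == target else 0
    return reward

population = [init_agent() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scores = [evaluate(a) for a in population]
    # keep the best agents (highest reward) ...
    best = [population[i] for i in np.argsort(scores)[-TOP_K:]]
    # ... and refill the population with mutated copies of them
    children = []
    while len(best) + len(children) < POP_SIZE:
        parent = best[rng.integers(len(best))]
        children.append(parent + rng.normal(scale=0.05, size=parent.shape))
    population = best + children
    print(f"generation {gen}: best reward = {max(scores)}")
```

Gradient-free selection like this is slow compared to backprop, but it matches the "pick the best n agents and continue" setup described in the question.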
How to annotate these type of data for custom tr-ocr training | https://discuss.huggingface.co/t/how-to-annotate-these-type-of-data-for-custom-tr-ocr-training/25227 | 0 | 499 | Help | 2022-10-30T14:50:15Z | [] |
Online/streaming speech recognition | https://discuss.huggingface.co/t/online-streaming-speech-recognition/4456 | 2 | 2,923 | Are there plans to implement online decoding for the speech recognition models such as wav2vec2 and XLSR? More specifically, to be able to receive audio in short chunks, and output partial transcripts as they become available.MotivationMany use cases are covered by the current wav2vec2 model in the library, involving batch recognition of pre-recorded text. However for an online application that wanted to continuously recognize speech on a live input stream, this may not be sufficient. | 2021-03-17T00:22:36Z | [
{
"date": "2021-09-11T18:50:39Z",
"reply": "I would very much like to know whether this is possible too! Have you gotten any further on this,@arkadyark?"
},
{
"date": "2022-10-26T08:19:28Z",
"reply": "please check this oneUse wav2vec2 models with a microphone easilyBeginnersHello folks, \nI wrote a little lib to be able to use any wav2vec2 model from the model hub with a microphone. Since wav2vec2 does not support streaming mode, I used voice activity detection to create audio chunks that I can feed into the model. \nHere is a little example, you canfind the code on github. \nfrom live_asr import LiveWav2Vec2\n\ngerman_model = \"maxidl/wav2vec2-large-xlsr-german\"\nasr = LiveWav2Vec2(german_model,device_name=\"default\")\nasr.start()\n\ntry: \n while True:\n tex…"
}
] |
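The reply above relies on a separate library; the sketch below shows the same chunk-by-chunk idea using only transformers. It is offline chunking rather than true streaming, and the model name, chunk length and 16 kHz mono assumption are choices made for the example.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

def transcribe_chunks(audio, sample_rate=16_000, chunk_seconds=5.0):
    """Transcribe a long 1-D waveform chunk by chunk and return partial texts."""
    chunk_size = int(chunk_seconds * sample_rate)
    partials = []
    for start in range(0, len(audio), chunk_size):
        chunk = audio[start:start + chunk_size]
        inputs = processor(chunk, sampling_rate=sample_rate, return_tensors="pt")
        with torch.no_grad():
            logits = model(inputs.input_values).logits
        ids = torch.argmax(logits, dim=-1)
        partials.append(processor.batch_decode(ids)[0])
    return partials
```

In a real online setting the chunk boundaries would ideally follow voice activity detection (as in the reply) so words are not cut in half.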
Exploring contexts of occurrence of particular words in large datasets | https://discuss.huggingface.co/t/exploring-contexts-of-occurrence-of-particular-words-in-large-datasets/22119 | 2 | 793 | Hi everybody, how are you? I am currently working on a project where we would like to explore and obtain the contexts of occurrence of particular words or n-grams in large datasets used to train language models, such as GitHub - josecannete/spanish-corpora: Unannotated Spanish 3 Billion Words Corpora. As you can imagine, the problem is that when dealing with such large datasets, conventional strategies like using pandas and similar libraries require a lot of RAM and computing power, so here are my questions: Does the platform have any tools already available to carry out different types of searches on large datasets, which would facilitate this task? Is there some kind of server/service within the platform, with enough RAM and computing power, that we can access to load the full datasets and use an API to interact with from our Space? Thank you very much! Hernán | 2022-08-25T23:55:27Z | [
{
"date": "2022-08-29T22:10:11Z",
"reply": "@nanomI’ll try and take a shot at providing some assistance. I am still a beginner at the huggingface suite but I’ve been using various aspects of it recently.Does the platform have any tools already available to carry out different types of searches on large datasets, which facilitates this task?Perhaps one thing to consider is thedatasetslibrary (here). From what I gather, it utilizes Apache Arrow under the hood to efficiently build a memory map of the data for efficient loading and processing. Withindatasetsthere is amap()function that I have used extensively with great success. If your dataset is some what customized, it might be worthwhile to build a loading script for thedatasetsobject and then runmap()over the data to perform your searches/calculations. I have done both of these recently and am happy to help and share my experience if you think it will benefit you.Is there some kind of server/service within the platform with enough RAM and computing power that we can access to load the full datasets and use an API to interact from our Space?This one I’m not super certain of. If I read your question correctly, you’re asking about the possibly to load some data, model, and training routine onto a set of compute hardware on hugginface’s end that has a lot of RAM (and possibly GPUs) available to run the training pipeline. If this is the case, then perhaps thehardware solutionand/or theHF servicesmight be of interest."
},
{
"date": "2022-10-19T17:27:37Z",
"reply": "Thank you very much for your response!@nanommanaged to implement an inverted index to address the first problem but we are still struggling with hardware limitations. Do you know who we should contact to ask some questions about which is the best pricing option for a particular project regarding hardware?HF services"
}
] |
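The map() route suggested in the reply keeps the corpus memory-mapped on disk instead of loading it into RAM. Below is a minimal sketch; the dataset (wikitext) and the search word are stand-ins for the Spanish corpora mentioned in the question.

```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def find_contexts(example, word="language", window=40):
    # record a fixed character window around the first occurrence of `word`
    text = example["text"]
    idx = text.lower().find(word)
    example["context"] = text[max(0, idx - window): idx + window] if idx != -1 else ""
    return example

hits = ds.map(find_contexts, num_proc=4).filter(lambda ex: ex["context"] != "")
print(len(hits), hits[0]["context"])
```

Because Arrow files are memory-mapped, this scales to corpora much larger than RAM; only the worker processes' batches are held in memory at any time.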
Explaining medical diagnosis | https://discuss.huggingface.co/t/explaining-medical-diagnosis/24664 | 0 | 511 | Hi there, have any of you come across a model (or set of models) for explaining clinical diagnoses? I think it may be similar to text summarisation, but with a few additions: it needs to use different vocab (for patients), and some parts may not be necessary or even shouldn’t be explained (disturbing content that a real doctor should explain directly). Best, Michal | 2022-10-19T11:59:29Z | []
Attention mask and token ids | https://discuss.huggingface.co/t/attention-mask-and-token-ids/15243 | 1 | 2,168 | Hi, I am taking the following wonderful course: Transformers. While we do padding, we pad the sequence with 0 and ask the model not to consider the padding. I was wondering if there is some token with id = 0? Because in that case we would be avoiding a token with id = 0, which is not good. Could anybody please help me here? Thank you very much. | 2022-03-01T23:23:57Z | [
{
"date": "2022-10-18T14:37:46Z",
"reply": "First, you’re right, we wouldn’t want to avoid real input.That’s why we use a padding token.There are different special tokens, such as the padding token, begin of sentence (BOS) token, end of sentence (EOS), unknown (unk) and more.Eventually, since we’re working with vectors of numbers (tensors) every token has a token id corresponding to the token. Meaning, the special tokens are also embedded as numbers.Usually the padding id correspond to 0, so when you pad with 0, you actually use the padding token, which is great"
}
] |
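A quick way to confirm the point made in the reply is to inspect the tokenizer directly; the snippet assumes the bert-base-uncased tokenizer, whose [PAD] token happens to have id 0.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tok.pad_token, tok.pad_token_id)   # [PAD] 0

enc = tok(["short text", "a slightly longer input text"], padding=True)
print(enc["input_ids"])                  # padded positions are filled with 0 ([PAD])
print(enc["attention_mask"])             # 0s in the mask mark the padded positions
```

So the 0s are the [PAD] token id, not a "real" word id being masked out, and the attention mask tells the model which positions to ignore.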
BERT from scratch without self-supervised learning | https://discuss.huggingface.co/t/bert-from-scratch-without-self-supervised-learning/24397 | 0 | 594 | Suppose one copies or creates the bert-base architecture, meaning the model layers themselves and not the training curriculum (MLM and NSP). Next suppose that one adds on a classifier head to the copied bert-base architecture that consists of a single linear layer to make predictions over the set of classes associated with a dataset. One then randomizes the model’s parameters and begins training this model on a labeled dataset using supervised learning only. Namely, with the preprocessed data (that includes positional embeddings), this data is passed all the way through the bert-base architecture and the linear classifier layer to produce a prediction over the class set, a loss is calculated, and the weights are updated via backpropagation and stochastic gradient descent. My question is, would this be a good idea? Is there anything about this approach (compared to the self-supervised followed by task-specific training curriculum) that would prevent one from obtaining decent metrics on a test set? As I understand, the BERT authors had a lot of unlabelled data, but suppose one had an equivalent amount of labelled data for a particular domain (let’s say sentiment about movie reviews). Is there any reason why the above approach would produce poor results? | 2022-10-13T17:12:22Z | []
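For concreteness, the setup described in the question (bert-base layers plus a linear classification head, randomly initialised, no MLM/NSP pre-training) can be built in a couple of lines; the label count is a placeholder.

```python
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(num_labels=2)              # bert-base sized by default
model = BertForSequenceClassification(config)  # random weights: no from_pretrained call
# `model` can now be trained with the Trainer or a manual loop on labelled data;
# whether this reaches decent test metrics without pre-training is exactly the
# open question posed above.
```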
Cross Lingual Transfer Learning ( XNLI ) | https://discuss.huggingface.co/t/cross-lingual-transfer-learning-xnli/24383 | 0 | 796 | I was reading this paper on XNLI, and I wanted to understand what TRANSLATE-TRAIN and TRANSLATE-TEST entail. I will write down what I understood. TRANSLATE-TRAIN: in this, we train N models, where N stands for 15 languages, so we train 15 separate models, one for each language. How do we test this model? Should we run each of these 15 models per language and jot down the average accuracy under each language? For example: we train 15 language models, then we test each of these 15 models on the English test set and then calculate the average accuracy. Does this sound right? I have been struggling with this baseline for so long. https://arxiv.org/abs/1911.02116 [figure from the paper omitted] | 2022-10-13T11:58:02Z | []
XLSR-Wav2Vec2 with punctuation | https://discuss.huggingface.co/t/xlsr-wav2vec2-with-punctuation/5775 | 1 | 1,331 | Hi, I’ve been trying to train XLSR-Wav2Vec2 to predict transcription + “relevant” punctuation (typically we don’t keep the punctuation). The idea was to get punctuation in an end-to-end manner, as the audio sample gives us additional hints to differentiate between statements, questions and exclamations, vs doing an additional post-processing. The goal is to be able to speak without saying “period”, “question mark”, etc., which is unnatural. Here are my main steps: I started from the transformers example run_common_voice; I use the CommonVoice English dataset as it’s easier to preprocess than other languages; I use unidecode to preprocess the text, which does a lot of smart changes → Málaga becomes Malaga, François becomes Francois, etc.; my regex of chars to remove is "()[\]_+/=%|` (was tricky to create, the order here matters); I have a dict of resamplers (since they’re not all 16,000); I filter by duration. Not sure if the WER metric should be adapted. Maybe I should add a separator between the punctuation, but based on the way it’s calculated, I feel like it should decrease regardless. So far my training loss reduces (when using the full dataset it gets to nan, probably due to some corrupted examples) but I keep a WER of 1. When testing a long run, I just get an empty output. To reproduce: clone this repo and run python run_common_voice.py --dataset_config_name en --output_dir ./model --overwrite_output_dir --model_name_or_path facebook/wav2vec2-large-xlsr-53 --num_train_epochs 3 --per_device_train_batch_size 16 --evaluation_strategy epoch --fp16 --freeze_feature_extractor --group_by_length --gradient_checkpointing --do_train --do_eval --save_total_limit 1 --logging_steps 100 --warmup_steps 500 --load_best_model_at_end --metric_for_best_model wer --greater_is_better False --gradient_accumulation 2 --activation_dropout 0.055 --attention_dropout 0.094 --feat_proj_dropout 0.04 --hidden_dropout 0.047 --layerdrop 0.041 --learning_rate 0.000234 --mask_time_prob 0.082 --per_device_eval_batch_size 8. Feel free to give any suggestions. I’ll update if I get more interesting results. | 2021-04-26T14:07:49Z | [
{
"date": "2022-10-12T17:24:42Z",
"reply": "Hi, how did you preprocess the punctuation?"
}
] |
How to train relation extraction? | https://discuss.huggingface.co/t/how-to-train-relation-extraction/24280 | 0 | 1,261 | I am a little bit confused. For example, I want to fine-tune a NER model on English BERT, and I realize that John Snow Labs has an NLP task for relation extraction. My question is: how can we train the relation extraction after fine-tuning the NER? Can we do it in Hugging Face, and what transformer model is used for relation extraction? | 2022-10-11T15:33:14Z | []
Problem in understanding the test phase in Few-shot learning | https://discuss.huggingface.co/t/problem-in-understanding-the-test-phase-in-few-shot-learning/24231 | 0 | 575 | I am studying few-shot learning ([1703.05175] Prototypical Networks for Few-shot Learning) and its source code. But the point is that the query and support sets in the test phase do not make sense to me. I am trying to understand why the test (or validation) phase uses labeled data for the query set. It is very different from classification or semi-supervised learning. When we train the encoder, we use L2 distance, a support set, and a query set to train the network. The samples in both sets are chosen based on their labels (k-way, n-shot setting). In the test phase, we have labels that are different from the training set. We choose the support and query sets the same way as in the training phase, without updating the encoder weights. But the query set is chosen based on labels. I do not understand why in the test phase we use the query set and try to predict the label based on distance from just x-way. Shouldn't we calculate the prototype of each label and compute the distance of the query images to all labels in the test phase? That makes more sense than calculating the distance of query samples without considering all labels. The same goes for the training phase: no distance is computed among all labels’ prototypes. Again, testing based on support and query sets does not make sense to me in the test phase. | 2022-10-10T15:20:05Z | []
`nan` training loss but eval loss does improve over time | https://discuss.huggingface.co/t/nan-training-loss-but-eval-loss-does-improve-over-time/4521 | 5 | 3,796 | I’ve been playing around with the XLSR-53 fine-tuning functionality but I keep getting a nan training loss. The audio files I’m using are: down-sampled to 16 kHz, set to one channel only, and vary in length between 4 and 10 s. I’ve set the following hyper-params: attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1; learning rate: on a warmup schedule to 3e-4 for 3 epochs, at 5e-4 for 3 epochs, back to 3e-4. Sadly, I’m fine-tuning the model on an unpublished corpus, so I am probably not at liberty to upload it here, which might hinder reproducibility efforts greatly. Here’s what the loss and WER progression looks like: [training curve screenshot omitted]. Anyone know what could be happening here? The model seems to be training just fine and some testing proves that the model performs well on the language I’m training it on. So what’s up with the training loss? Pinging @patrickvonplaten and @valhalla as this might be relevant to them. | 2021-03-17T19:53:43Z | [
{
"date": "2021-03-18T06:59:14Z",
"reply": "Hey@jjdv,I’m sorry without a google colab it will be difficult to debug this for us. Given that your WER seems to decrease nicely - there might just be a problem at displaying the values…let’s see whether other people encounter the same problem"
},
{
"date": "2021-03-18T16:41:32Z",
"reply": "hey@patrickvonplaten!I forgot to attach the notebook to my post. (I’m not fine-tuning on colab so feel free to just import the notebook there).Again, not sure how useful it would be since the data isn’t available publicly (yet!)Here’s the notebook!"
},
{
"date": "2021-03-21T21:04:36Z",
"reply": "I looked a bit into it and the problem is the following:If one loss becomesnanorinfall the following displayed losses also becomenanorinfsince the shown loss is the average of all losses seen so far, see:transformers/trainer.py at 82b8d8c7b02562695f88be81cf0993972e324874 · huggingface/transformers · GitHubHowever this doesn’t mean that the losses afternanis displayed are actually useless → the model can very well train. So it’s more of a display error than an actual error often times. All in all my best suggestion here is to just take a look at the validation loss and if it goes down smoothly continue training"
},
{
"date": "2021-03-23T19:43:03Z",
"reply": "Someone suggested adding this parameter in hopes of getting rid of this problem:ctc_zero_infinity=TrueLoss is gonna be gigantic and it does hold that every time I faced this issue, the first training loss wasInfso this is probably a good fix for the issue!"
},
{
"date": "2022-10-10T10:43:56Z",
"reply": "i have same problem but also i have eval_wer is 1.0, at the beginning of training eval_wer is 0.6 and 0.5 and after 19 ephocs the eval_wer is 1.0 and still 1.0 in ephoc 33"
}
] |
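For reference, the ctc_zero_infinity fix mentioned in the thread is a config flag that can be passed when loading the CTC model; the checkpoint name below is an example, and in a real fine-tuning setup you would also pass the vocabulary/pad-token arguments for your processor.

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_zero_infinity=True,       # zero out infinite CTC losses instead of propagating them
    ctc_loss_reduction="mean",
)
```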
LayoutLM for extraction of information from tables | https://discuss.huggingface.co/t/layoutlm-for-extraction-of-information-from-tables/7464 | 1 | 1,421 | Can the LayoutLM model be used or tuned for table detection and extraction?The paper says that it works on forms, receipts and for document classification tasks. | 2021-06-27T15:43:27Z | [
{
"date": "2022-09-29T07:23:58Z",
"reply": "Hi@ujjayants, were you able to find the answer. I too have the same question in mind. Just want to know your findings.Thanks"
}
] |
Is there a way to split a news article into subtopic | https://discuss.huggingface.co/t/is-there-a-way-to-split-a-news-article-into-subtopic/23436 | 4 | 1,185 | Hello, is there a way I can perform text segmentation on news articles?For example, a news article usually contains the main topic, but when reading through, there might probably be some subtopics present in the article. Is there a way I can divide those articles into those subsections/subtopics so that a news article can contain 2,3 or more sections depending on the subtopics discussed in that particular article.In case you are curious about what I need this for, I’m performing summarization on news articles, so instead of summarizing or parsing the whole article into the model at once, I want to divide them into sections based on what is discussed in the article and then summarize each section. Basically I’m trying to imitate what is done atsummari.comI will appreciate it if someone has done something like this before, or if anybody knows a way I can work through it. | 2022-09-21T10:53:57Z | [
{
"date": "2022-09-21T15:55:51Z",
"reply": "I’d recommend looking intoBERTopic"
},
{
"date": "2022-09-22T11:04:33Z",
"reply": "Thanks for your response, I checked it out and it is not addressing what I’m trying to do.BertTopic is kind of grouping multiple articles into various topics based on how frequently some words appear there.But what I’m trying to do is that given a single article I want to be able to divide that article into sections/subtopics if any is present."
},
{
"date": "2022-09-22T13:04:12Z",
"reply": "You could break the article into paragraphs and run it through BERTopic"
},
{
"date": "2022-09-22T14:23:00Z",
"reply": "Wow, I will try this out. Thank you."
}
] |
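The suggestion that closes this thread (split one article into paragraphs, then run BERTopic over the paragraphs) could look roughly like the sketch below; it assumes the bertopic package is installed, a reasonably long article so there are enough paragraphs to cluster, and a placeholder file name.

```python
from bertopic import BERTopic

article = open("article.txt").read()                   # placeholder article source
paragraphs = [p.strip() for p in article.split("\n\n") if p.strip()]

topic_model = BERTopic(min_topic_size=2)
topics, probs = topic_model.fit_transform(paragraphs)  # one topic id per paragraph
print(list(zip(topics, paragraphs)))
# consecutive paragraphs sharing a topic id can then be merged into one section
# and each section summarized separately
```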
Best practices for estimating FLOPs-per-token with real datasets? | https://discuss.huggingface.co/t/best-practices-for-estimating-flops-per-token-with-real-datasets/23394 | 1 | 1,588 | Hi folks,I’m currently reading the T-Few paper on few-shot learning and in section 4.2 they provide a table and estimate of the 11B parameter model’s inference costs as follows:We summarize the costs in table 1 and discuss them below. For all estimates, we use the median number of shots (41) across the datasets we consider. Rank evaluation and our unlikelihood loss both require processing every possible output choice to attain a prediction for an unlabeled example.The median combined tokenized sequence length for the input and all possible targets is 103 for the datasets we consider.…Processing a single input and all target choices with T-Few requires 11e9×103 = 1.1e12 FLOPs, whereas few-shot ICL with GPT-3 175B requires 2×175e9×(41 × 98 + 103) = 1.4e15 FLOPs – more than 3 orders of magnitude more.My question is: why is themedianinput sequence length used for the FLOPs estimate instead of themean?I understand that a dataset can have outliers in length, but I’m curious whether using the median is common practice.Thanks! | 2022-09-20T12:40:53Z | [
{
"date": "2022-09-20T13:44:17Z",
"reply": "From Colin Raffel internally:Yeah, the mean can be a bit weird for sequence length since it’s a heavy-tailed distribution with lots of outliers (not normally distributed). I think in this case the median and mean were similar and we just used the median since it’s an int."
}
] |
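For readers who want to check the arithmetic, the two estimates quoted in the question can be reproduced directly; the expressions are copied from the passage as written (the factor of 2 appears only in the GPT-3 term there).

```python
t_few_flops = 11e9 * 103                        # ≈ 1.1e12 FLOPs
gpt3_icl_flops = 2 * 175e9 * (41 * 98 + 103)    # ≈ 1.4e15 FLOPs
print(f"{t_few_flops:.2e}  {gpt3_icl_flops:.2e}  ratio ≈ {gpt3_icl_flops / t_few_flops:.0f}x")
```

The ratio comes out to roughly 1,300x, i.e. "more than 3 orders of magnitude", as stated.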
Resources on interpretability of wav2vec-style speech models | https://discuss.huggingface.co/t/resources-on-interpretability-of-wav2vec-style-speech-models/23050 | 0 | 621 | Hello everyoneBig thanks to HuggingFace for creating this amazing framework, and the active community as well! I’ve been using huggingface for a while now and been reading this forum as well.I am working on multi-lingual speech models and am interested in understanding how the pre-trained wav2vec-style models represent input utterances (from a phonetics perspective if possible). For example, I would like to know how Language Identification Model like “VoxLingua107 Wav2Vec Spoken Language Identification Model” goes about representing a collection of short utterances in English vs. say Thai.The most straight-forward method I know is to take final layer output embeddings (in inference mode) and to use t-SNE to cluster. But this doesn’t seem to help as much.I am looking for literature, codes, frameworks (like Captum) and tutorials which use wav2vec-style models and focus on interpretability. Please help. Thank you | 2022-09-12T23:53:24Z | [] |
Keypoint Detection Accuracy is Very Low | https://discuss.huggingface.co/t/keypoint-detection-accuracy-is-very-low/22483 | 0 | 856 | Unfortunately, I cannot say too much about my dataset, but I am trying to predict hundreds of keypoints/landmarks on a given image/video feed. I’m having great difficulty with my model architecture; I have not been able to get an accuracy greater than 20%. My model is based on similar ones I found via GitHub and a few academic papers; however, they were all predicting dozens of points vs my hundreds. I only saw two model architectures across these sources:
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.models.Sequential([
layers.Conv2D(32, (3,3), padding='same', input_shape=(512,512,1)),
layers.LeakyReLU(),
layers.MaxPool2D((2,2)),
layers.Conv2D(64, (3,3), padding='same'),
layers.LeakyReLU(),
layers.MaxPool2D((2,2)),
layers.Flatten(),
layers.BatchNormalization(),
layers.Dense(128),
layers.ReLU(),
layers.Dropout(0.5),
layers.Dense(64),
layers.ReLU(),
layers.Dropout(0.5),
layers.Dense(501)
])

model = tf.keras.models.Sequential([
layers.Conv2D(32, (5,5), input_shape=(512,512,1), strides=1),
layers.Conv2D(32, (3,3), strides=1),
layers.MaxPool2D((2,2), padding="valid"),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Conv2D(64, (5,5), strides=2),
layers.Conv2D(64, (5,5), strides=2),
layers.AveragePooling2D((2,2), padding="valid"),
layers.Flatten(),
layers.Dense(128),
layers.ReLU(),
layers.Dropout(0.5),
layers.Dense(501),
layers.Softmax()
])
My dataset is quite large, roughly 15K; this includes augmented data. Just looking for feedback. I’ve put this in the research section because I noticed that there are very few models out there for keypoint detection; object detection seems to be much more popular. | 2022-09-03T12:16:41Z | []
Abstractive summarization ensemble | https://discuss.huggingface.co/t/abstractive-summarization-ensemble/19987 | 1 | 941 | Hi! I was wondering if anyone could point me to papers, blog posts, etc. that explain how to ensemble previously trained models for abstractive text summarization (if possible). Moreover, is anything like this already implemented in Hugging Face? | 2022-07-04T21:13:13Z | [
{
"date": "2022-08-31T12:26:18Z",
"reply": "What do you mean exactly by “ensembling” models?There are some models available on HuggingFace, for example the BART summarization model fine-tuned on the CNN/DailyMail dataset. You can take a look at the model card and how to use ithere. The implementation is very straightforward."
}
] |
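As a minimal usage sketch for the BART CNN/DailyMail model mentioned in the reply (the generation parameters are just common defaults, not recommendations):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = "..."  # article text goes here
summary = summarizer(text, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```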
Using OPTForSequenceClassification | https://discuss.huggingface.co/t/using-optforsequenceclassification/22340 | 0 | 677 | Hi. I’m getting an error when trying to import the OPTForSequenceClassification class: ImportError: cannot import name ‘OPTForSequenceClassification’ from ‘transformers’ (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py). Any heads up on why this might be the case? I saw the huggingface GitHub already added this class. | 2022-08-31T09:54:05Z | []
Zero shot classification for automated electrocardiogram reports | https://discuss.huggingface.co/t/zero-shot-classification-for-automated-electrocardiogram-reports/21594 | 3 | 1,146 | Hi, I am a person from healthcare and new to this forum. I am doing research related to classifying automated electrocardiogram (ECG) reports with pre-defined labels. After reading about zero-shot classification, I feel this can support my research, and I really want to know what type of transformers would support this work, and also the steps to be taken. Thank you. | 2022-08-13T17:53:25Z | [
{
"date": "2022-08-26T14:55:48Z",
"reply": "Hello,While I do not have a specific answer to your question, I would like to make a couple of observations that might help in getting you more replies.Your question is a little bit too open-ended for the forum perhaps, but regardless, it would greatly help other readers if you could specify:Why you feel zero shot learning could support your research (if you have access to labels as you seem to say, you may not need zero-shot learning but you could do some actual training!)What exactly your ECG data look like. Are they pictures, numbers in a sequence, Excel tables, free text? This makes a huge difference to model choice and right now your question is really not specific enough to be able to help you. Most HuggingFace models are meant to be used with sequential data (such as free text), although there are exceptions.What output you would expect from the model. Is this a binary output (such as “healthy”/“unhealthy”), a multi-class output, or a fully fledged worded report written by a machine? This will also influence your model choice quite a lot.A more precise indication of what you aim to get out of this question. Regarding the “steps to be taken” (and not knowing your level of knowledge in machine learning) this request could require a full book / course to be written, or maybe just a few high level bullet points (however these are unlikely to help if you don’t have coding experience and prior machine learning understanding). Again I’m having to make too many assumptions, which is why probably this question hasn’t received many replies, together with being too broad / vague. Adding the info I mentioned in the bullet points above might help others understand your use case better.Hope this helps."
},
{
"date": "2022-08-26T17:41:24Z",
"reply": "HiThank you for your time and replyMy ECG data is a free text like belowimage943×563 80 KBI want to classify the free text into 4 defined labelsSince i have few training data with pre-defined labels, i thought of using zero shot classification or few shot classificationThe output is to say Eg : Normal ECG , Abnormal ECG, Myocardial infarctionThe aim is to classify the free text into 4 common labelsAnd I don’t have coding experience, trying to find the way to do and get help from experts in the forumThank you"
},
{
"date": "2022-08-26T18:31:50Z",
"reply": "Hello,so here are my initial thoughts:Firstly, your “free text” is actually part of an image (what you posted is an image to a computer, not text), so before we even consider training a model, you’d need to find a way to convert the image into text. Something like OCR (thisis a free online software but there is much better around) would do the job, however you’d need to check the quality of the results, as there are things in your image which you could add noise such as |V1 |V4 etc., as well as the ECG itself, so this is the very first step.Once you’re confident that the images are correctly (within a reasonable margin, as there will be some mistakes) converted into text by your software of choice, and assuming you do not need the actual ECG trace for your prediction but just the text, I’d suggest starting from a simple multi-class classification model. Considering your 3 classes are very specific, I wouldn’t think that zero shot would work particularly well, so even if you have just a few training data I’d suggest using them all for fine-tuning. However, large models require large amounts of data to perform well, unless you’re using more niche techniques like Bayesian learning which are way outside the scope of this answer.Hereis an example of what your code could look like if you want to use transformers (which is what this forum is about) for multi class classification, however there are also other simpler NLP models available (such as SVM, random forest, Bayes classifier, logistic regression etc.), for example available from the scikit-learn python library. However, any coding activity will involve taking inspiration from others code, and making your own modifications to suit your use case, and without any coding experience I believe it would be extremely hard to “go blind” and copy others code without understanding it and getting it to work. That’s why I believe this project, for a beginner who has never coded before, would require an amount of supervision which is well outside the scope of a single question on this forum in my view. Depending on your time and dedication, I believe that a good point to start to get more familiar with these things would be to do a crash course on Python (learning the basics of the python language)andalso a machine learning course. There are plenty of free resources available online but there will be a learning curve.Regarding machine learning for beginners (but with some coding and maths understanding) I would recommendAndrew Ng’s courseson Coursera (online attendance is free). Course 1 will explain all the basics of machine learning, and Course 5 will discuss language models, which are relevant to NLP and will enable you to start writing your own sequence to sequence models.Let’s see if others have further useful suggestions, but to manage expectation, in my opinion this is a task which requires a non negligible amount of learning before it can be attempted by someone with zero prior coding experience."
}
] |
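For completeness, the zero-shot route discussed in this thread boils down to very little code once the ECG report has been converted from image to plain text (e.g. via OCR, as the reply suggests). The model choice, the placeholder report string and the exact label wording below are examples, not clinical recommendations.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report = "Sinus rhythm. ST elevation in anterior leads."   # placeholder OCR output
labels = ["Normal ECG", "Abnormal ECG", "Myocardial infarction", "Other"]
print(classifier(report, candidate_labels=labels))
```

As the reply notes, if labelled reports are available, fine-tuning a classifier on them will usually beat zero-shot on such specialised labels.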
Can you make Q&A language model stay on topic? | https://discuss.huggingface.co/t/can-you-make-q-a-language-model-stay-on-topic/21828 | 0 | 483 | I’m thinking of fine-tuning a pre-trained language model for a Q&A task. More specifically, I’d like to fine-tune the model on a single chapter in a classic college textbook. Afterward, the reader of the chapter should be able to engage in a Q&A session with the model about the content of the chapter. But how do I make sure that the model stays on topic and doesn’t go off on a tangent? I know it is possible when looking at what https://play.aidungeon.io/ has achieved, but I don’t know if it will require me to build a model from the ground up for each chapter. Can anyone tell me if I’m out of my mind or if it’s feasible? Best, | 2022-08-19T12:04:58Z | []
How to get embedding to each n-grams from a sentence using BERT? | https://discuss.huggingface.co/t/how-to-get-embedding-to-each-n-grams-from-a-sentence-using-bert/21562 | 0 | 733 | Given a set of labels with different numbers of words, such as: labels=["computer accessories", "baby", "beauty and personal care"]. Is there an approach to computing label embeddings in a single BERT forward pass (considering the list of labels as a single sentence)? Or does it have the same computational cost as a forward pass for each label? | 2022-08-12T14:44:01Z | []
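One practical option, shown below, is to encode all labels in a single forward pass by treating them as a padded batch (rather than concatenating them into one sentence) and mean-pooling the token embeddings. The model choice (bert-base-uncased) and the pooling strategy are assumptions for the sketch.

```python
import torch
from transformers import AutoModel, AutoTokenizer

labels = ["computer accessories", "baby", "beauty and personal care"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tok(labels, padding=True, return_tensors="pt")
with torch.no_grad():
    out = model(**enc).last_hidden_state           # (num_labels, seq_len, hidden)
mask = enc["attention_mask"].unsqueeze(-1)
label_emb = (out * mask).sum(1) / mask.sum(1)      # mean over real (non-pad) tokens only
print(label_emb.shape)                             # torch.Size([3, 768])
```

Compute-wise this is one batched pass rather than one pass per label, which is usually faster on GPU even though the total token count is similar.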
Public Research Survey | https://discuss.huggingface.co/t/public-research-survey/20979 | 0 | 597 | Hello, my name is Christian Flores. I am a recent UCSD graduate gathering public opinion on the impact of AI in day-to-day life for a research project. I would appreciate it if you could spare a few minutes to share your thoughts on the topic. The survey is estimated to take 5-10 minutes to complete. Your answers are anonymous. Thank you! Survey link: https://ucsd.co1.qualtrics.com/jfe/form/SV_0B8t3X8WfSZep1A | 2022-07-28T22:26:22Z | []
DeBerta Paper Explained and Dissected | https://discuss.huggingface.co/t/deberta-paper-explained-and-dissected/20158 | 0 | 709 | Hello everyone, DeBERTa has been ruling Kaggle competitions as well as global benchmarks recently. If you have ever used it and wonder how it works internally, I have put together a blog post for it: “DeBERTa is the new King!” on jarvislabs.ai (5 Jul 22) - learn about the DeBERTa architecture and find out how it outperforms the SOTA BERT and RoBERTa. In this blog post I explain all the novel components that DeBERTa introduces and how they all work together to create a performance boost. You will love it if you really love transformers. | 2022-07-08T17:29:32Z | []
Summary Decoding params | https://discuss.huggingface.co/t/summary-decoding-params/19711 | 0 | 585 | Hi, when applying decoding, it is common to provide decoding params such as min_length, max_length, beam_size, length penalty, etc. I wonder if anyone is aware of a methodology or research for determining these params, and whether they could be dynamic rather than hard-coded. I have found this paper for multi-document summarization: aclanthology.org D13-1069.pdf. If anyone knows any additional resources, it would be highly appreciated. Thanks! | 2022-06-28T07:45:13Z | []
Pre-Train BERT (from scratch) | https://discuss.huggingface.co/t/pre-train-bert-from-scratch/1245 | 43 | 18,498 | BERT has been trained on the MLM and NSP objectives. I wanted to train BERT with/without the NSP objective (with NSP in case the suggested approach is different). I haven’t performed pre-training in the full sense before. Can you please share how to obtain the data (crawl and tokenization details which were used) on which BERT was trained? Since it takes a lot of time, I am looking for well-tested code that can yield BERT with/without NSP in one go. Any suggestions will be helpful. I know about some projects like these, but I guess they won’t integrate well with transformers, which is a must-have condition in my case. | 2020-09-24T13:01:31Z | [
{
"date": "2020-09-25T06:44:43Z",
"reply": "BERT was trained onbook corpusandenglish wikipediaboth of which are available indatasetlibraryhuggingface.cowikipedia · Datasets at Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.huggingface.cobookcorpus · Datasets at Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.Transformers has recently included dataset for for next sent prediction which you could usegithub.comhuggingface/transformers/blob/main/src/transformers/data/datasets/language_modeling.py#L258# We *usually* want to fill up the entire sequence since we are padding# to `block_size` anyways, so short sequences are generally wasted# computation. However, we *sometimes*# (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter# sequences to minimize the mismatch between pretraining and fine-tuning.# The `target_seq_length` is just a rough target however, whereas# `block_size` is a hard limit.target_seq_length = max_num_tokensif random.random() < short_seq_prob:target_seq_length = random.randint(2, max_num_tokens)# We DON'T just concatenate all of the tokens from a document into a long# sequence and choose an arbitrary split point because this would make the# next sentence prediction task too easy. Instead, we split the input into# segments \"A\" and \"B\" based on the actual \"sentences\" provided by the user# input.examples = []current_chunk = [] # a buffer stored current working segmentscurrent_length = 0i = 0while i < len(document):and there’s also NSP head for BERThttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L560EDIT:BertForPreTrainingclass can be used for bothMLMandNSPwith the currentexample/languae-modelingI guess it’s only possible to either useMLMorNSP, you might need to write your own script to combine these."
},
{
"date": "2020-09-25T07:39:48Z",
"reply": "For training on MLM objective, is it recommended to usecollate_fnfromhere? Didn’t seeTextDatasetfor MLM objective."
},
{
"date": "2020-09-25T07:42:53Z",
"reply": "Masking is done usingDataCollatorForLanguageModelingso you can use any dataset and just pass the collator toDataLoader.One thing to note:DataCollatorForLanguageModelingdoes dynamic masking but BERT was trained using static masking ."
},
{
"date": "2020-09-25T07:52:20Z",
"reply": "It seems that usingBertForNextSentencePredictionwithTextDatasetForNextSentencePredictionandDataCollatorForLanguageModelingwould be equivalent to the BERT objective (except static masking part). And for dataset, we can usedatasets.concatenate_datasets()method for BookCorpus and Wikipedia. This might be close right ? Any additional details ?"
},
{
"date": "2020-09-25T09:10:05Z",
"reply": "datasets.concatenate_datasets()does not seem to work for this since features do not match. AlsoBertForNextSentencePredictionexpects afile_path. Initially I thought it was a wrapper which can takedatasetsobjects."
},
{
"date": "2020-09-25T10:25:43Z",
"reply": "It shouldn’t be hard to convertBertForNextSentencePredictionto use datasets. I played with wikipedia dataset for english just now. Each dataset entry is an article/document and it needs to be sentence tokenized inBertForNextSentencePrediction. Book corpus dataset entries seem to be sentences already. Let me know about your progress."
},
{
"date": "2020-09-25T10:27:59Z",
"reply": "How are you measuring the metric ?"
},
{
"date": "2020-09-25T10:39:47Z",
"reply": "I don’t yet. I am still setting up these training pipelines. I asked about metrics atEvaluation metrics for BERT-like LMsbut no response yet. I read athttps://huggingface.co/transformers/perplexity.htmland elsewhere that perplexity is not appropriate for BERT and MLMs. Can’t we use fill-mask pipeline and some version of masking accuracy?OTOH, I’ve already setup GLUE benchmarks withhttps://jiant.info/v2 Alpha. Excellent integration with transformers and can easily plugin any model and run benchmarks in parallel. Seehttps://github.com/jiant-dev/jiant/tree/master/examplesfor more details"
},
{
"date": "2020-09-25T10:44:05Z",
"reply": "Did you try using Cross Entropy for pre-training ? We usually use that for MLM. It can be easily used for NSP I guess."
},
{
"date": "2020-09-25T13:29:11Z",
"reply": "Indeed wikipedia has columns “text” and “title” while bookcorpus only has “text”.You can concatenate them by removing the “title” column from wikipedia:from datasets import load_dataset, concatenate_datasets\n\nwiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\nbookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\nprint(wiki.column_names, bookcorpus.column_names)\n# ['title', 'text'] ['text']\n\nwiki.remove_columns_(\"title\")\nbert_dataset = concatenate_datasets([wiki, bookcorpus])"
},
{
"date": "2020-09-25T13:33:32Z",
"reply": "Let me know if you find an appropriate way to cut wikipedia articles into sentences !Also don’t hesitate if you have any questions about dataset processing, I’d be happy to help"
},
{
"date": "2020-09-25T14:34:09Z",
"reply": "You can use spaCy or stanza for sentence segmentation. spaCy is quite a bit faster but might be less correct. If you want to I can post a segmentation function here."
},
{
"date": "2020-09-25T14:36:58Z",
"reply": "So after concatenation of wikipedia and book_corpus, next things to do is NSP. Can you suggest how that is to be done on object after concatenation happens?I do not want to diverge from the actual method which was used to pre-train BERT."
},
{
"date": "2020-09-25T14:39:24Z",
"reply": "You can have a look here:github.comhuggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1196)input_ids = torch.cat([input_ids, dummy_token], dim=1)return {\"input_ids\": input_ids, \"attention_mask\": attention_mask}@add_start_docstrings(\"\"\"Bert Model with a `next sentence prediction (classification)` head on top. \"\"\",BERT_START_DOCSTRING,)class BertForNextSentencePrediction(BertPreTrainedModel):def __init__(self, config):super().__init__(config)self.bert = BertModel(config)self.cls = BertOnlyNSPHead(config)self.init_weights()@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format(\"batch_size, sequence_length\"))@replace_return_docstrings(output_type=NextSentencePredictorOutput, config_class=_CONFIG_FOR_DOC)"
},
{
"date": "2020-09-25T14:39:55Z",
"reply": "Has anyone replicated BERT pre-training from scratch ? It would be good to hear what exactly did they do."
},
{
"date": "2020-09-25T14:40:51Z",
"reply": "I already saw it. I tried using it, but got stuck with other things such as metric, preprocessing etc. Given that training will last for a week, there is not much scope to make errors."
},
{
"date": "2020-09-25T14:43:24Z",
"reply": "Also, is there some study or has anyone experimented what happens if we solely rely on MLM and no NSP. How much difference will that make ? RoBERTa showed that NSP didn’t prove to be useful. In this case, does involving NSP help with MLM ?"
},
{
"date": "2020-09-25T14:51:37Z",
"reply": "Well as you found, RoBERTa showed that leaving out NSP yields better results on downstream tasks. Albert then re-added a similar (yet very different) task, namely sentenceorderprediction, which improved performance on downstream tasks.PS: please don’t post multiple consecutive posts but rather edit your posts to add more information. It’s a bit annoying with the notifications."
},
{
"date": "2020-09-25T15:39:26Z",
"reply": "Quentin, I am not sure dataset itself should cut articles into sentences (unless there is an option for both articles/sentences). Perhaps other models might need entire articles as input. If needed, users can sentence tokenize articles using nltk/spacy and such. I’ll play with the wikipedia dataset in the coming days and I’ll report back to you my experiences. Also, while looking at the dataset I found references to Categories and such. Perhaps equally important objective for wikipedia dateset is to keep it as clean as possible."
}
] |
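This thread converges on combining MLM and NSP through BertForPreTraining. The sketch below is one possible way to wire those pieces together; it is a hedged sketch rather than the exact original BERT recipe, the file path and hyperparameters are placeholders, and the legacy TextDatasetForNextSentencePrediction class expects a text file with one sentence per line and blank lines between documents.

```python
from transformers import (
    BertConfig,
    BertForPreTraining,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    TextDatasetForNextSentencePrediction,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForPreTraining(BertConfig())          # train from scratch (random init)

dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="corpus.txt", block_size=128
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-from-scratch",
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```

Dropping NSP (as RoBERTa did, per the discussion above) reduces this to the standard MLM-only examples/language-modeling setup.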
How to fine tune fine tune GitHub Copilot? | https://discuss.huggingface.co/t/how-to-fine-tune-fine-tune-github-copilot/18889 | 3 | 3,571 | We can fine-tune language models like BERT and GPT-3. Can I fine-tune the GitHub Copilot model? I have already looked into https://copilot.github.com/ but can’t find the details. Would really appreciate it if someone has fine-tuned GitHub Copilot. | 2022-06-09T04:26:50Z | [
{
"date": "2022-06-09T12:34:10Z",
"reply": "Hi@neo-benjaminThe Codex model that’s powering the Copilot product is not open sourced. However, there are a few models similar to Codex available on the Hugging Face Hub such as Incoder or CodeGen:huggingface.cofacebook/incoder-6B · Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.huggingface.coSalesforce/codegen-16B-multi · Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science."
},
{
"date": "2022-06-09T20:19:57Z",
"reply": "How to fine tune Codegen? Are the steps documented?"
},
{
"date": "2022-06-24T13:40:30Z",
"reply": "You can have a look at the language modeling examples. The should work for any auto regressive model such as GPT-2 or CodeGen:transformers/examples/pytorch/language-modeling at main · huggingface/transformers · GitHub"
}
] |
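The linked examples are scripts, but the same causal-language-modeling fine-tuning can be written directly with the Trainer. The sketch below is hedged: it uses a smaller CodeGen checkpoint than the ones discussed, a placeholder text file of code as training data, and arbitrary hyperparameters.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

ds = load_dataset("text", data_files={"train": "my_code_corpus.txt"})["train"]
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("codegen-finetuned", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```

The larger 6B/16B checkpoints mentioned in the replies need the same code but substantially more GPU memory (or parameter-efficient methods).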
Similarity search with combined image and text? | https://discuss.huggingface.co/t/similarity-search-with-combined-image-and-text/19168 | 6 | 2,823 | How can I do similarity matching by combining both image and text? Let’s say: Product1 = Image1, Text1; Product2 = Image2, Text2. I want to do contrastive learning by combining both the image and the text. Is there such a model? Can anyone please suggest one? | 2022-06-14T23:36:21Z | [
{
"date": "2022-06-20T06:46:37Z",
"reply": "TheSentenceTransformercan encode images and text into a single vector space. You could combine both to create a new vector space for products, and then implement contrastive learning for this vector space.Seesentence-transformers/Image_Search.ipynb at master · UKPLab/sentence-transformers · GitHub"
},
{
"date": "2022-06-20T08:17:38Z",
"reply": "Like in the notebook referenced by@raphaelmerx, I also used a pre-trained CLIP model to embed images and text in the same vector space, so you can perform semantic search:Weights & Biases."
},
{
"date": "2022-06-21T19:17:36Z",
"reply": "@raphaelmerxDo you have a sample code for contrastive learning using SentenceTransformer?"
},
{
"date": "2022-06-21T19:46:12Z",
"reply": "@raphaelmerxI understand the idea of combining the text and image into a single vector space and then implement contrastive learning.But wondering are you aware of an open source implementation for doing contrastive learning? Or code that I could adapt for this purpose."
},
{
"date": "2022-06-24T00:34:04Z",
"reply": "@raphaelmerxin the given example, you have shownmodel.encodeto encode images and text. Do you have any example how to apply that for contrastive learning?"
},
{
"date": "2022-06-24T04:35:53Z",
"reply": "I don’t have any code sample of contrastive learning no"
}
] |
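The first step discussed in this thread (before any contrastive fine-tuning) is simply producing one vector per product from its image and text. The sketch below uses the CLIP checkpoint from sentence-transformers; the model name, file names and the concatenation strategy are choices for illustration, and averaging or a learned projection would also work.

```python
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")

def product_embedding(image_path: str, text: str) -> np.ndarray:
    img_emb = model.encode(Image.open(image_path))
    txt_emb = model.encode(text)
    return np.concatenate([img_emb, txt_emb])   # simple fusion of the two modalities

emb1 = product_embedding("product1.jpg", "red running shoes")
emb2 = product_embedding("product2.jpg", "blue trail sneakers")
cosine = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
print(cosine)
```

Contrastive training would then pull embeddings of matching product pairs together and push non-matching pairs apart, but as noted in the replies, a ready-made open-source recipe for that part was not identified in this thread.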
LayoutLMv3 paper review and fine tuning code | https://discuss.huggingface.co/t/layoutlmv3-paper-review-and-fine-tuning-code/19495 | 0 | 1,182 | LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. Hi guys, I made a small video going through the LayoutLMv3 paper. Feel free to check it out. | 2022-06-23T09:13:40Z | []
Grouphug: multi-task, multi-dataset training with 🤗 transformers/datasets | https://discuss.huggingface.co/t/grouphug-multi-task-multi-dataset-training-with-transformers-datasets/19177 | 0 | 2,335 | I recently released grouphug - a package optimized for training on multiple datasets/dataframes at once, with each containing an arbitrary subset of tasks, built on transformers/datasets. The need for this came from wanting a single model to predict many closely related things like message topic, sentiment, toxicity, etc., with the inference speed of a single model, and better accuracy. I have also found that co-training on a masked language modelling task results in models which generalize very well and do not start overfitting. Even for single-task modelling, the classification head is a good deal more powerful than the usual default, and the dataset formatter may be useful to quickly turn your dataframes into the format needed. Would love to hear if this is useful for anyone else, and any suggestions you have! | 2022-06-15T07:26:19Z | []
LSTM Encoder-Decoder not working | https://discuss.huggingface.co/t/lstm-encoder-decoder-not-working/18697 | 0 | 787 | I am trying to train an LSTM Encoder-Decoder model for paraphrase generation. My model is as follows:StackedResidualLSTM(
(encoder): RecurrentEncoder(
(embed_tokens): Embedding(30522, 256)
(dropout): Dropout(p=0.5, inplace=False)
(rnn): LSTM(256, 256, num_layers=2, batch_first=True, dropout=0.5)
)
(decoder): RecurrentDecoder(
(embed_tokens): Embedding(30522, 128)
(dropout_in_module): Dropout(p=0.5, inplace=False)
(dropout_out_module): Dropout(p=0.1, inplace=False)
(layers): ModuleList(
(0): LSTMCell(384, 256)
(1): LSTMCell(256, 256)
)
(fc_out): Linear(in_features=256, out_features=30522, bias=True)
)
)Following is a print of the source sentence, the sentence fed to the decoder (shifted right), the predictions, and the true sentence (labels). Everything is tokenized with BERT tokenizer:Source: [CLS] where can i get quality services in brisbane for plasterand drywall repair? [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD]Decoder Input: [CLS] [CLS] where can i getquality services for plaster and drywall repairs in brisbane? [SEP][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]Preds:[CLS] the? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]?[SEP]Target: [CLS] where can i get quality services for plaster anddrywall repairs in brisbane? [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD]My loss function is a CrossEntropy between the output and labels (the padding token is switched with -100 to ignore). Something like:loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))There are two problems occurring:the loss does not go downthe generations are all the same for every entry of the same epoch (after weight updating the generations might be different than the ones from the previous epoch, but remain the same for every entry of the new epoch)Do you have any idea what might I try to fix the issue? Thanks in advance for any help you can provide. | 2022-06-03T17:15:48Z | [] |
Graph2graph network for geometric shapes | https://discuss.huggingface.co/t/graph2graph-network-for-geometric-shapes/18609 | 0 | 787 | Similar to Seq2Seq models, are there graph2graph models available? Context: I am working on a dimension reduction problem on shapes, where shapes are represented as graphs, vertices as nodes, and connecting curves as edges. The dimension reduction operation is called Midcurve generation. The input is a 2D profile, say a closed polygon (example: a thick ‘L’ profile, on the left in the figure). The output is a 1D curve in the middle of the profile (example: a thin ‘L’ curve, on the right in the figure) [encoder-decoder figure omitted]. I wish to build an encoder-decoder network which accepts graphs as input as well as output. I have a training set of such input and output graphs, a supervised set. As I could not find a ready graph2graph network, I converted the problem to image2image (say, pix2pix-like) and am solving it that way. But I wish to investigate whether a graph2graph network is available or not. More info: short paper: MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon, viXra.org e-Print archive, viXra:1904.0429; GitHub repo, source code: GitHub - yogeshhk/MidcurveNN: Computation of Midcurve of Thin Polygons using Neural Networks. How to build such an encoder-decoder network? Please note that since the input and output are different, this cannot be an AutoEncoder. Any ideas? | 2022-06-01T10:27:33Z | []
Steps to train T5 on collections of tags | https://discuss.huggingface.co/t/steps-to-train-t5-on-collections-of-tags/18600 | 0 | 665 | Hiya! I’m working on my own model of Imagen; Instead of sentence prompts, my image-pair dataset uses a series of tags to describe an image. An example would be (without quotes): “sunny_day park dog parked_motorcycle female_walking” - and there can be anywhere from a few tags to 30+ tags per image. Because Imagen uses T5 to generate embeddings, I’d need to train a T5 model from scratch based on these collections of tags instead of using transfer learning, correct? Would these tags need to be presented as an array of strings, or one large string? What else would I need to do? And if it’s possible to answer: If my dataset was about 250K, how long would it take to train a T5 large on this dataset on either a P100 or latest generation TPU? Thanks for the help! | 2022-06-01T07:58:21Z | [] |
Pegasus Paraphrase Fine Tuning dataset | https://discuss.huggingface.co/t/pegasus-paraphrase-fine-tuning-dataset/18531 | 0 | 687 | Hi @tuner007, I was wondering if you could update the model card for pegasus_paraphrase to include the dataset that you used to finetune Google’s checkpoint? Alex | 2022-05-30T16:01:51Z | []
Technical Skill classification model | https://discuss.huggingface.co/t/technical-skill-classification-model/18487 | 0 | 687 | A raw dataset of over 30k data points is given which contains technical skills with a lot of jargon mixed in. We need to develop code that can clean this dataset and extract technical (hard) skills. Some 900 random examples of technical skills are also given, to go through them and understand the pattern and sequence. How should we go about this problem? | 2022-05-29T17:34:15Z | []
Optuna with a fine-tuned model | https://discuss.huggingface.co/t/optuna-with-a-fine-tuned-model/18113 | 1 | 712 | How can I use Optuna to optimize a fine-tuned model? Is there any example? | 2022-05-18T17:58:01Z | [
{
"date": "2022-05-19T11:35:30Z",
"reply": "Is that a fine-tuned model like a frozen model and I can not make it better?"
}
] |
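The Trainer exposes Optuna through hyperparameter_search, and the fine-tuned checkpoint can simply be reloaded in model_init for every trial (it is not "frozen"; further training just starts from its weights). The checkpoint name, datasets and search space below are placeholders for the sketch.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # reload the already fine-tuned checkpoint at the start of each trial
    return AutoModelForSequenceClassification.from_pretrained("my-finetuned-checkpoint")

def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hp-search", per_device_train_batch_size=16),
    train_dataset=train_ds,   # assumed to exist (tokenized training set)
    eval_dataset=eval_ds,     # assumed to exist (tokenized validation set)
)
best_run = trainer.hyperparameter_search(direction="minimize", backend="optuna",
                                         hp_space=hp_space, n_trials=10)
print(best_run)
```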
Video Classification | https://discuss.huggingface.co/t/video-classification/17995 | 0 | 799 | Hi everyone,I am starting to look into the task ofclassifying videos, trying to understand what approaches are currently available.Naively speaking, I guess one could randomly (maybe better, uniformly) sample N frames from a video, perform classification on each of them, and then aggregate predictions (most frequent prediction, most confident prediction, etc.). This may be reasonable for simple classification tasks (e.g. is there a cat in this video? Is the video set indoors or outdoors?).On the other hand, this approach would lose any temporal information conveyed by the frame sequence and the sound/speech information, for which a multi-modal model that can process sequences would be required.So I was wondering if any of you can point out examples of models that have been proposed/used for video classification in any of these directions.I tried browsing the HuggingFace directory but could not find a “video classification” task category, and I have the feeling (after some web searching) that this topic is generally less covered than image or text classification.Any pointer/suggestion is very much appreciated | 2022-05-16T10:43:24Z | [] |
Ideas for scoring coding assignments | https://discuss.huggingface.co/t/ideas-for-scoring-coding-assignments/17862 | 0 | 719 | Hey guys , I am searching for ways to use NLP to score coding/programming assignments like what a teacher will do in an exam/test. what ideas come to mind?any papers or similar problem solutions will be much appreciated ! | 2022-05-12T10:18:21Z | [] |
Dynamic Programming for Byte-level BPE | https://discuss.huggingface.co/t/dynamic-programming-for-byte-level-bpe/17376 | 0 | 863 | Could anyone explain the rationale behind equation (1) in "Neural Machine Translation with Byte-Level Subwords"? Besides, what exactly is meant by "The design of UTF-8 encoding ensures the uniqueness of this recovery process: for a character UTF-8 encoded with multiple bytes, its trailing bytes will not make a valid UTF-8 encoded character"? How exactly are the hexadecimal digits in Figure 1 derived? | 2022-05-01T02:58:55Z | []
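On the second question, a small illustration (not from the post) of the UTF-8 property being quoted: continuation (trailing) bytes always lie in 0x80-0xBF, so they can never be confused with the leading byte of a character, which is what makes the byte-to-character recovery unique. The hexadecimal digits in Figure 1 are presumably just these UTF-8 bytes of each character.
for ch in ["A", "é", "中", "𝄞"]:
    bs = ch.encode("utf-8")
    # a continuation byte matches the bit pattern 10xxxxxx
    kinds = ["continuation" if b & 0b1100_0000 == 0b1000_0000 else "leading" for b in bs]
    print(ch, [hex(b) for b in bs], kinds)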
Own AI deploy webapp | https://discuss.huggingface.co/t/own-ai-deploy-webapp/17247 | 0 | 791 | Hello, I am a student at the higher technical college Leonding in Austria. At our college we have our own servers with Kubernetes that we are allowed to use as students. Three colleagues and I have a 2-year project where we have to program a web app in which you can easily deploy and train an AI with a few clicks. Basically, we should be able to do the same as AutoTrain/AutoNLP from Hugging Face, only on our school servers. I have already looked through Hugging Face's GitHub for open-source repos but didn't really find anything. The question is: can I run AutoTrain and AutoNLP on the school server, or are there alternatives for creating such a web app? Thank you for your answers! | 2022-04-27T11:40:47Z | []
Bert for audio classification | https://discuss.huggingface.co/t/bert-for-audio-classification/17179 | 0 | 1,085 | I have been thinking at a very high abstract level about using Bert for something like audio classification. Suppose I have a time series data set of sampled sounds and their labels, something like an short audio clip of a dog barking that has the label “dog_bark”. I’m wondering if it’s possible to use the Bert architecture to perform this classification?Naively, I would say that one would have to pre-train Bert from scratch since the input data is time series data represented by floats. That would also lead me to think that one would have to also reconsider how they perform the token embeddings. I don’t have any super concrete ideas, but that was where I was starting. Curious if others had similar ideas or thoughts on the matter?EDIT: I am aware that there are other models out there better suited for this that perhaps fall into ASR or audio classification like wav2vec. However, in this instance I was specifically curious about adapting bert to the task. | 2022-04-25T22:30:57Z | [] |
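A rough sketch (not from the post) of the adaptation being described: chop the waveform into fixed-size frames, learn a linear projection from each frame to the hidden size in place of token embeddings, and feed the resulting sequence to a from-scratch BERT encoder via inputs_embeds. All sizes are arbitrary illustrative choices.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel
class AudioBertClassifier(nn.Module):
    def __init__(self, frame_size=400, hidden_size=256, num_labels=10):
        super().__init__()
        config = BertConfig(hidden_size=hidden_size, num_hidden_layers=4,
                            num_attention_heads=4, intermediate_size=512)
        self.frame_size = frame_size
        self.proj = nn.Linear(frame_size, hidden_size)  # plays the role of token embeddings
        self.encoder = BertModel(config)                # randomly initialised, to be pre-trained
        self.head = nn.Linear(hidden_size, num_labels)
    def forward(self, waveform):                        # waveform: (batch, num_samples)
        b, n = waveform.shape
        n = n - n % self.frame_size                     # drop the ragged tail
        frames = waveform[:, :n].reshape(b, -1, self.frame_size)
        hidden = self.encoder(inputs_embeds=self.proj(frames)).last_hidden_state
        return self.head(hidden.mean(dim=1))            # mean-pool over time
logits = AudioBertClassifier()(torch.randn(2, 16000))   # two 1-second clips at 16 kHz
print(logits.shape)                                     # torch.Size([2, 10])
As the post notes, models like wav2vec 2.0 already do something along these lines with a convolutional feature encoder and a masked pre-training objective.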
Confidence Scores / Self-Training for Wav2Vec2 / CTC models With LM (PyCTCDecode) | https://discuss.huggingface.co/t/confidence-scores-self-training-for-wav2vec2-ctc-models-with-lm-pyctcdecode/17052 | 1 | 2,750 | I started looking a bit into Confidence Scores / Self-Training for Speech Recognition for models like Wav2Vec2 that make use of a language model via the pyctcdecode library. PyCTCDecode returns an lm_score which can be seen as the fused score between the acoustic model (Wav2Vec2) and a language model (kenLM). This score is the sum of all per-word fused lm_scores, so it seems reasonable to normalize the output by the number of words. Also see some questions here: "confidence scores output from the LM" (Issue #57, kensho-technologies/pyctcdecode) and "Question about naming of `lm_score` parameter in `decode_logits`" (Issue #63, kensho-technologies/pyctcdecode).
First, let's create some Wav2Vec2 + ngram models. We'll simply add the official 4-gram of Librispeech to the new data2vec models to create the following models: patrickvonplaten/data2vec-audio-base-10m-4-gram, patrickvonplaten/data2vec-audio-base-100h-4-gram, patrickvonplaten/data2vec-audio-base-960h-4-gram.
Now, it's quite easy to retrieve those lm_scores and to compute a confidence level this way:
Import all necessary libraries and load model and tokenizer:
from transformers import AutoModelForCTC, AutoProcessor
from datasets import load_dataset
import datasets
import torch
import sys
model_id = "TODO: fill in"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
Load Librispeech dummy data:
num_samples = 4
dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
samples = dataset[:num_samples]
audio_samples = [s["array"] for s in samples["audio"]]
sampling_rate = set([s["sampling_rate"] for s in samples["audio"]]).pop()
text_samples = samples["text"]
Predict transcription with model:
# process to input_values
inputs = processor(audio_samples, return_tensors="pt", sampling_rate=sampling_rate, padding=True)
# forward inputs to model
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
Retrieve the per-word probability normalized over the number of words:
output = processor.batch_decode(logits.numpy(), output_word_offsets=True)
confidence_scores = [score / len(t.split(" ")) for score, t in zip(output.lm_score, output.text)]
Define the confidence score as the length-normalized lm_score of the prediction:
for i in range(num_samples):
print(20 * "=" + f"Output {i}" + 20 * "=")
print(text_samples[i])
print(f"{output.text[i]}: {confidence_scores[i]}")
    print("\n")
Cool let's run this on the new data2vec audio models: patrickvonplaten/data2vec-audio-base-10m-4-gram, patrickvonplaten/data2vec-audio-base-100h-4-gram, patrickvonplaten/data2vec-audio-base-960h-4-gram.
patrickvonplaten/data2vec-audio-base-10m-4-gram
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APPOSELE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL:
-2.9550299660242825
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTR'S MANNER LESS INTERESTING THAN HIS MATTER:
-3.8471058156146243
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS IS THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CRISMIIS AND ROST BEEF LOOMING BEFORE HIS SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND:
-3.115683062281252
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVED DOUBTS WHETHER SIR FREDERICK LATEN'S WORK IS RELY GREEK AFTER ALL AND CAN DESCOVER IN IT BUT LITTLE OF ROCKY ETHICA:
-4.292775884726897
patrickvonplaten/data2vec-audio-base-100h-4-gram
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL:
-1.0723093529710663
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER:
-2.6140757339617786
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND:
-1.1805021799946347
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LAYTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA EH:
-2.069009737832042
patrickvonplaten/data2vec-audio-base-960h-4-gram
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL:
-1.0610139720694658
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER R:
-3.11299682252419
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND:
-1.147767963941466
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA:
-1.870571726475313
Alright, this actually seems to make some sense here! The 10m model consistently has the lowest score, and one can usually say that the more correct the sentence, the better the score. The 960h model has the best scores for all but Output 1, for which the 100h model also gives a better prediction. This already seems to work quite well, but would need some more experiments. There are a couple of questions I'm not sure about: right now the average probability per word is taken; would min or max maybe be better? Also see: "confidence scores output from the LM" (Issue #57, kensho-technologies/pyctcdecode). | 2022-04-21T11:13:34Z | [
{
"date": "2022-04-21T13:25:38Z",
"reply": "Also tried it out on a “out-of-distribution” dataset - the English version of Common Voice and it still seems to work quite well.So changing the above 2th point “Load librispeech dummy data” to the following code that loads common voice data:dataset = load_dataset(\"common_voice\", \"en\", split=\"test\", streaming=True)\ndataset = dataset.cast_column(\"audio\", datasets.Audio(sampling_rate=16_000))\n\n# iterate over dataset\ndataset_iter = iter(dataset)\nsamples = [next(dataset_iter) for _ in range(num_samples)]\n\naudio_samples = [s[\"audio\"][\"array\"] for s in samples]\nsampling_rate = set([s[\"audio\"][\"sampling_rate\"] for s in samples]).pop()\ntext_samples = [s[\"sentence\"] for s in samples]And then running the script again gives the following results:patrickvonplaten/data2vec-audio-base-10m-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY LEVERS BEN SLEPT DURING THE SUMMER:\n-3.5796514559110606\n\n\n====================Output 1====================\nSame way you did.\nTHE SAME POINT: \n-6.560971691113143\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nBUT I TOLD HIM THAT SHE WAS IN TO SEE HER BROTHER: \n-1.249188184327079\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nCALLILI GALLI WAS A FRESHMAN WHO ABSORVES TO PLANT NAPS THOUGH HIS TELICSCOP: \n-7.170448685148719patrickvonplaten/data2vec-audio-base-100h-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY WHEN OLIVE'S PEN SLEPT DURING THE SUMMER: \n-1.724733290751429\n\n\n====================Output 1====================\nSame way you did.\nTHE SAME DIN YOU TIED: \n-11.673662061158192\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nTHERE I TOLD HIM THAT SHE WAS HERE TO SEE HER BROTHER: \n-1.3407323223953858\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nGALILEO GALILEI WAS A FRESHMAN WHO OBSERVES THE PLANT NUPKINS THROUGH HIS TELECSCOPE: \n-5.179441703647934patrickvonplaten/data2vec-audio-base-960h-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY WHEN OLIVER BEN SLEPT DURING THE SUMMER: \n-1.4758548315739513\n\n\n====================Output 1====================\nSame way you did.\nTHE BLIND YOU IN IT: \n-8.845217131011449\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nBUT I TOLD HIM THAT SHE WAS HERE TO SEE HER BROTHER: \n-1.3983698052694178\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nGALILEO GALIDI WAS THE FIRST MAN WHO OBSERVES TO PLAN NAPTHA THROUGH HIS TELECOSCOPE: \n-4.983984955432581So the numbers here still seem to be very reasonable. Everything over -3, is quite wrong indeed and things are starting to look better below -2"
}
] |
Confidence Scores / Self-Training for Wav2Vec2 / CTC models | https://discuss.huggingface.co/t/confidence-scores-self-training-for-wav2vec2-ctc-models/17050 | 1 | 3,510 | I started looking a bit into Confidence Scores / Self-Training for Speech Recognition for models like Wav2Vec2. The most reasonable way of doing so is to do it on a per-word level basis. With the new output_word_offsets=True it's quite easy to retrieve the logit scores of the predicted words. E.g. one could do the following:
Import all necessary libraries and load model and tokenizer:
from transformers import AutoModelForCTC, AutoProcessor
from datasets import load_dataset
import datasets
import torch
import sys
model_id = "TODO: fill in"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
Load Librispeech dummy data:
num_samples = 4
dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
samples = dataset[:num_samples]
audio_samples = [s["array"] for s in samples["audio"]]
sampling_rate = set([s["sampling_rate"] for s in samples["audio"]]).pop()
text_samples = samples["text"]
Predict transcription with model:
# process to input_values
inputs = processor(audio_samples, return_tensors="pt", sampling_rate=sampling_rate, padding=True)
# forward inputs to model
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
Compute probabilities (log softmax here) of the predicted (argmax) logits:
pred_ids = torch.argmax(logits, dim=-1)
scores = torch.nn.functional.log_softmax(logits, dim=-1)
pred_scores = scores.gather(1, pred_ids.unsqueeze(-1))[:, :, 0]
Retrieve the per-word probability normalized over word length:
output = processor.batch_decode(pred_ids, output_word_offsets=True)
# add confidence
def confidence_score(word_dict, index):
probs = pred_scores[index, word_dict["start_offset"]: word_dict["end_offset"]]
return round(torch.sum(probs).item() / (len(probs)), 4)
confidence_scores = []
for i in range(num_samples):
    confidence_scores.append({d["word"]: confidence_score(d, i) for d in output.word_offsets[i]})
Define confidence score as minimum word prob:
for i in range(num_samples):
print(20 * "=" + f"Output {i}" + 20 * "=")
print(text_samples[i])
print(f"{' '.join(confidence_scores[i].keys())}: {min(confidence_scores[i].values())}")
    print("\n")
Cool let's run this on the new data2vec audio models: facebook/data2vec-audio-base-10m, facebook/data2vec-audio-base-100h, facebook/data2vec-audio-base-960h.
It should be clear that the 960h model should have "more" confidence than the 100h model. However, the outputs are as follows:
facebook/data2vec-audio-base-10m
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APPOSELE OF MIDL CLASES AND WHE ER GLAD TO WELCOME HIS GASPLE: -0.5873
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTERE QUILTR'S MANER LES INTRESTING THAN HIS MATER: -0.4173
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELES IS THAT AT THIS FESTIVE CESON OF THE YEAR WITH CRISMIIS AND ROST BEF LOOMING BEFOR SEIMILIYS DRAWN FROM EATING ITS RESALTS OCARE MOST REDHILY TO MIND: -0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GREAVED DOUBTS WETHER SIR FREDRICK LATEN'S WORK IS RELY GRE AFTER ALL AND CAN DESCOVER IN IT BUT LITTLE OFE ROCKY ETHICA: -0.0006
facebook/data2vec-audio-base-100h
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -0.7656
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER: -0.5057
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE SIMILES DRAWN FROM EATING ITS RESULTS OCCUR MOST READILY TO MINE: -0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LAYTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHICA EH: -0.0
facebook/data2vec-audio-base-960h
====================Output 0====================
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
MISTER QUILTER IS THE APOSTLE OF MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -0.938
====================Output 1====================
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER
NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER RR: -0.6415
====================Output 2====================
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND
HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE SIMILES DRAWN FROM EATING ITS RESULTS OCCUR MOST READILY TO MIND: 0.0
====================Output 3====================
HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA
HE HAS GRAVE DOUBTS WHETHER SIR FREDERIC LEYHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA: -0.0
As can be seen, this doesn't seem to be too useful. Incorrect text is predicted with very high confidence by the 10m model, there is really no difference between the 960h and the 10m model at all, nor between correctly and incorrectly predicted sentences. There are a couple of questions I'm not sure about: Is it even possible to do confidence scoring for ASR without a language model? Should the minimum (lowest prob) of all words be taken as the confidence of the transcription, or the average? Should the word prob correspond to a length-normalized log_sum or not be normalized? | 2022-04-21T10:57:24Z | [
{
"date": "2022-04-21T11:14:14Z",
"reply": "Using a LM in addition to Wav2Vec2 definitely seems to be better here! SeeConfidence Scores / Self-Training for Wav2Vec2 / CTC models With LM (PyCTCDecode)"
}
] |
Text to Speech Alignment with Transformers | https://discuss.huggingface.co/t/text-to-speech-alignment-with-transformers/16166 | 2 | 4,236 | Hi there,I have a large dataset of transcripts (without timestamps) and corresponding audio files (avg length of one hour). My goal is to temporally align the transcripts with the corresponding audio files.Can anyone point me to resources, e.g., tutorials or huggingface models, that may help with the task? Are there any best practices for how to do it (without building an entire system from scratch)?My initial naive idea was to use a STT model to transcribe the audio (while documenting timestamps) and then performing some kind of similarity search with the transcript to align the two. However, I feel this approach might be quite error prone.I am happy for any kind of help/pointer.Simon | 2022-03-28T14:00:56Z | [
{
"date": "2022-04-19T13:10:30Z",
"reply": "This task is called Forced Alignment and there are reasonably mature tools to do it with classical approaches. I’d suggest perusingforced-alignment · GitHub Topics · GitHub.If the accuracy of the classical methods isn’t good enough for you, you can peruse research papers on, say,Speech | Papers With Code"
},
{
"date": "2022-04-20T07:17:19Z",
"reply": "Thank you so much for the reply! Currently, I’m starting to experiment withaeneas, however, I realize that the quality of my sound files is indeed very poor. Is it generally worthwile to try to improve the sound quality or would it be more fruitful to directly train/fine-tune a model to work with poorer sound quality end-2-end?"
}
] |
Projected gradient descent on autoregressive models | https://discuss.huggingface.co/t/projected-gradient-descent-on-autoregressive-models/16975 | 0 | 816 | I am doing text summarization along with a trained classifier (that gives a label to a outputted summarization), and I would like to find how far away certain classifier labels are from each other by using some adversarial attacks and visualizing it for summarizer’s encoder embeddings. Is there any part of the huggingface library focusing on doing i.e. projected gradient descent on (autoregressive) decoder to encoder embedding? | 2022-04-19T14:22:44Z | [] |
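A bare-bones sketch (not from the post) of projected gradient descent on input embeddings, using a plain sequence classifier as a stand-in for the summarizer-plus-classifier setup; the checkpoint, epsilon and step size are arbitrary illustrative choices.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()
enc = tok("The movie was great", return_tensors="pt")
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
delta = torch.zeros_like(embeds, requires_grad=True)    # adversarial perturbation
target = torch.tensor([0])                              # label we push the prediction towards
eps, alpha = 0.5, 0.1
for _ in range(10):
    out = model(inputs_embeds=embeds + delta, attention_mask=enc["attention_mask"])
    loss = torch.nn.functional.cross_entropy(out.logits, target)
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()              # step towards the target label
        delta.clamp_(-eps, eps)                         # project back onto the L-inf ball
    delta.grad.zero_()
print(model(inputs_embeds=embeds + delta, attention_mask=enc["attention_mask"]).logits)
For an encoder-decoder summarizer the same idea would be applied to the encoder's inputs_embeds, with the loss taken from the downstream classifier.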
Compute metric on Dev | https://discuss.huggingface.co/t/compute-metric-on-dev/16836 | 1 | 793 | Hello, I was wondering why notebooks compute BLEU or ROUGE on dev data rather than on test data, like this notebook does. | 2022-04-14T14:52:20Z | [
{
"date": "2022-04-15T10:15:03Z",
"reply": "It is common in deep learning to train on a train set, and monitor the loss of a validation set every x epochs or steps, as is done here. This way, you get an intuition of the model’s performance, particularly whether it is overfitting. If the training loss is very low but the validation is high, your model is overfitting. So the dev data here does not give you official test results, since the model is a bit biased towards that dev data: you keep training as long as the training loss and validation loss decreases.To probe the final performance of your model, you still test it on a held-out set that has never been used before (the test set). In Tensorflow, AFAIK, you can then evaluate on this unseen test set withmodel.evaluate."
}
] |
Text similarity not by cosine similarity | https://discuss.huggingface.co/t/text-similarity-not-by-cosine-similarity/8766 | 3 | 4,230 | Hi all, I have a question. I have a dataset containing questions and answers from a specific domain. My goal is to find the X most similar questions to a query. For example: user: "What is python?"; dataset questions: ["What is python?", "What does python means?", "Is it python?", "Is it a python snake?", "Is it a python?"]. I tried encoding the questions to embeddings and calculating the cosine similarity, but the problem is that it gives a high similarity score to "Is it python?" for the query "What is python?", which clearly does not have the same meaning, while "What does python means?" gets a very low score compared to "Is it python?". Any suggestions for how I can overcome this problem? Maybe new approaches… | 2021-07-28T13:06:30Z | [
{
"date": "2021-07-29T01:55:59Z",
"reply": "if cosine similarity is not giving you the results you want, you could try a different metric like euclidean / manhattan / minkowski distance or jaccard similarity.alternatively you could try changing the embedding model to see if that improves the comparisons"
},
{
"date": "2021-10-29T14:29:27Z",
"reply": "What you are trying to do is clearly one of theGLUE tasks:3.2 SIMILARITY AND PARAPHRASE TASKSMRPC The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whetherthe sentences in the pair are semantically equivalent. Because the classes are imbalanced (68%positive), we follow common practice and report both accuracy and F1 score.QQP The Quora Question Pairs2 dataset is a collection of question pairs from the communityquestion-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. As in MRPC, the class distribution in QQP is unbalanced (63% negative), so wereport both accuracy and F1 score. We use the standard test set, for which we obtained private labelsfrom the authors. We observe that the test set has a different label distribution than the training set.STS-B The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentencepairs drawn from news headlines, video and image captions, and natural language inference data.Each pair is human-annotated with a similarity score from 1 to 5; the task is to predict these scores.Follow common practice, we evaluate using Pearson and Spearman correlation coefficients.What I suggest you to do, is to follow thefollowing tutorialto pre-train your model on the dataset that is the most similar to what you are trying to do (ex: GLUE, QQP instead of GLUE MRCP in the tutorial)There is even available Leaderboard where you can find which model perform the best on QQP."
},
{
"date": "2022-04-12T05:52:17Z",
"reply": "These are not definitive solutions but experiments I’ve tried with vectorized representations and I’ve had some success:Definitely try Dot product. In my limited experience dot product has always given superior results to other metrics. There are reasons why metrics like euclidean might fail, things get freaky and weird when we’re extending our 3-dimensional intuition to 100 dimensions. However, experimentation is going to make you wiser.Refer to the first WordVectors paper where they do experiments like adding and subtracting vectors like concepts. For example, v(king) - v(man) + v(woman) is close to v(queen). These experiments are not perfect and I remember reading a paper stating a proposition that this kind of adding and subtracting is flawed which might have some merit. However, they’ve worked in a limited capacity for me. So, experiments like:v(What is python?) - v(What) + v(How) might lead you near places where python questions with How.v(x) refers to the vector of x"
}
] |
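A small sketch (not from the thread) that puts the suggestions above side by side, scoring the example questions with cosine similarity, dot product and euclidean distance on sentence embeddings; the encoder checkpoint is an arbitrary choice and any sentence encoder could be swapped in.
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("all-MiniLM-L6-v2")
query = "What is python?"
candidates = ["What is python?", "What does python means?", "Is it python?",
              "Is it a python snake?", "Is it a python?"]
q = model.encode(query, convert_to_tensor=True)
c = model.encode(candidates, convert_to_tensor=True)
cosine = util.cos_sim(q, c)[0]      # higher = more similar
dot = util.dot_score(q, c)[0]       # higher = more similar
euclid = (c - q).norm(dim=1)        # lower = more similar
for s, co, do, eu in zip(candidates, cosine, dot, euclid):
    print(f"{s!r:35} cos={co:.3f} dot={do:.3f} l2={eu:.3f}")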
Aggregate encoder states in encoder-decoder models for long sequences? | https://discuss.huggingface.co/t/aggregate-encoder-states-in-encoder-decoder-models-for-long-sequences/16625 | 0 | 719 | Hi. I would like to train a text-to-text QA model for long documents.I was wondering if anyone has seen success in aggregating the encoder states of a long document in any way (e.g. pooling) before passing it to the decoder, similar to the sliding window technique done for e.g. classification with BERT. I’m well aware of models like the longformer, etc, but just wondering if this approach has any utility, and if not, why not? | 2022-04-08T18:39:37Z | [] |
Why do the commit histories of Hugging Face's datasets and models appear recent? Weren't these datasets and models uploaded a while ago? | https://discuss.huggingface.co/t/why-do-the-commit-histories-of-hugging-faces-datasets-and-models-appear-recent-werent-these-datasets-and-models-uploaded-a-while-ago/16595 | 2 | 865 | I’ve been going over Hugging Face for research purposes, we’ve been looking over several datasets and models. We started doing this sometime ago last year during September and October. However, recently I checked this again, and it seems like these commit histories have changed.For instance, we looked overgemback in October of last year, but it shows that its commit history started on Jan 25.I am using Hugging Face as a use case for research purposes, I want to publish this research eventually, however people will ask questions about this, so I was wondering if someone could offer an explanation for this. | 2022-04-07T18:32:42Z | [
{
"date": "2022-04-08T07:25:52Z",
"reply": "Many times, people do changes in the repositories metadata to help with discoverability or consistency across models/datasets. Other times, the model card or dataset sheet are extended/improved. For example, the last change of GEM -Update files from the datasets library (from 1.17.0) · gem at d5a0674- is just fixing a typo in the dataset metadata"
},
{
"date": "2022-04-08T15:50:37Z",
"reply": "I see, but what does that imply for the commit histories? Do these updates mean that the commit histories are replaced?For the record, I just want to know in order to log this information as a justification for the procedures we’re taking. We’ve been observing the commit histories and want to be able to explain why they may change over time."
}
] |
Incorporating structural information in a Transformer? | https://discuss.huggingface.co/t/incorporating-structural-information-in-a-transformer/16554 | 0 | 712 | For a Neural Machine Translation (NMT) task, my input data has relational information. This relation could be modelled using a graphical structure. Some researchers have tried to exploit transformer for graph data. For example, here is onepaper.I want to use Transformer. But then the challenge is how can I embed structural information there? Is there any open source artefact for Relational Transformer that I can use out of the box? | 2022-04-06T19:50:25Z | [] |
Can you use both copy mechanism and BPE for a NMT task? | https://discuss.huggingface.co/t/can-you-use-both-copy-mechanism-and-bpe-for-a-nmt-task/16531 | 0 | 707 | I read to alleviate the problem of Out of Vocabulary (OOV), there are two techniques:BPECopy mechanismIt appears to me they are two orthogonal approaches.Can we combine the two, i.e., we use both the copy mechanism and BPE? Are there any work out there that combines the two? I cant find any. | 2022-04-06T11:44:44Z | [] |
Is there an easy way to apply layer-wise decaying learning rate in huggingface trainer for RobertaMaskedForLM? | https://discuss.huggingface.co/t/is-there-an-easy-way-to-apply-layer-wise-decaying-learning-rate-in-huggingface-trainer-for-robertamaskedforlm/1599 | 3 | 2,765 | I am pre-training RobertaMaskedForLM on my own custom dataset. I wanted to implement the layer-wise learning rate decay given inhttps://github.com/aws-health-ai/multi_domain_lm#learning-rate-controlcorresponding to the paper -An Empirical Investigation Towards Efficient Multi-Domain LanguageModel Pre-training. Is there an easy way to incorporate this decay of learning rate with layer depth towards input usingtransformers.Trainer? | 2020-10-17T09:31:45Z | [
{
"date": "2020-11-14T04:14:27Z",
"reply": "I have the same question"
},
{
"date": "2020-11-16T13:57:46Z",
"reply": "There is nothing in the lib for this, but you can pass your own optimizer and scheduler."
},
{
"date": "2022-04-05T09:01:27Z",
"reply": "Hello, I have the same question. I’m fine-tuning RoBERTa large for RE(Relation Extraction) task andthe paperI referenced usedlayer decay.It seems like I have to custom my own optimizer and scheduler for layer-wise learning rate decay. Could you tell me how you implemented your own scheduler?"
}
] |
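A minimal sketch (not from the thread) of one way to do this: build per-layer parameter groups with a decaying learning rate and hand the optimizer and scheduler to the Trainer through its optimizers argument. The decay factor, learning rate and step count are illustrative, not the values from the paper mentioned above.
from torch.optim import AdamW
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
base_lr, decay = 1e-4, 0.9
layers = [model.roberta.embeddings] + list(model.roberta.encoder.layer)
groups = []
for depth, layer in enumerate(layers):
    # layers closer to the output keep a higher learning rate
    lr = base_lr * decay ** (len(layers) - 1 - depth)
    groups.append({"params": list(layer.parameters()), "lr": lr})
# everything outside the encoder stack (e.g. the LM head) uses the base learning rate
covered = {id(p) for g in groups for p in g["params"]}
groups.append({"params": [p for p in model.parameters() if id(p) not in covered], "lr": base_lr})
optimizer = AdamW(groups)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=10_000)
# trainer = Trainer(model=model, args=training_args, train_dataset=...,
#                   optimizers=(optimizer, scheduler))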
The discussion is about entity recognition and coreference resolution | https://discuss.huggingface.co/t/the-discussion-is-about-entity-recognition-and-corefrence-resolution/16068 | 0 | 710 | Input = "I need 3 chairs in each class and there are 10 classes, so I need 30 chairs"; output = "30 chairs, 10 classes". I have used the concepts of coreference resolution and entity recognition, but I am unable to simplify the statement enough so that the input can become "I need 30 chairs and 10 classrooms", or to find a similar linguistic approach that would help me use entity recognition and solve this problem statement. | 2022-03-25T10:54:24Z | []
GPT2 for QA Pair Generation | https://discuss.huggingface.co/t/gpt2-for-qa-pair-generation/759 | 9 | 8,489 | I was wondering if it were possible to somehow train GPT2 to generate question-answer pairs in a particular domain? | 2020-08-18T21:59:56Z | [
{
"date": "2020-08-19T09:08:31Z",
"reply": "I’ve tried this with seq2seq models. I have worked on qa pair generation (separately) using T5 with descent results. You can find ithere.One way we can do this with GPT-2 is prepare our input like thisOur context is42 is the answer to life, the universe and everything, answer is42and target question isWhat is the answer to life, universe and everything ?Theninput text:context: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42and prepare the attention mask such that, there will be no attention fromquestion: ...part, so model won’t look into future tokens and calculate loss only on thequestion: ...part. And it inference time we will feed only the context part and ask the model to generate the question.This just one one way I can think of the of my mind. Feel free to correct me if this is wrong."
},
{
"date": "2020-08-19T18:37:53Z",
"reply": "@valhallaThanks for your response. That’s an interesting approach! Does that still require humans to create training “context” strings for gpt2?"
},
{
"date": "2020-10-12T19:51:20Z",
"reply": "@valhallaIf I understand this correctly:The input text will look likecontext: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42Mask out thequestionpart so the new text will look likecontext: 42 is the answer to life, the universe and everything. <BIG MASK> answer: 42That is what gets fed as input text into the GPT2 modelDoes this mean I define thelabelsinto the model as the text that is masked?"
},
{
"date": "2020-10-13T07:25:40Z",
"reply": "By mask, I meantattention_mask, theattention_maskshould be zero on the text you want to predict, so the model won’t peek into future.So if you want to generate question and answer, then the question and answer tokens should have 0in attention mask."
},
{
"date": "2020-10-13T11:19:41Z",
"reply": "Ah yes, sorry for my misunderstanding. So we mask out the parts we want to predict by setting theattention_maskof those tokens to 0.With these tokens masked inattention_mask, do we then pass it and the input string to GPT2 and train it with the language model head with no labels?"
},
{
"date": "2020-10-13T15:06:14Z",
"reply": "You’ll still need to passlabelsfor training.Training will be same as training any GPT-2 model, only difference is theattention_mask"
},
{
"date": "2020-10-13T15:53:54Z",
"reply": "If I only wanted to generate questions, would I set theattention_maskfor those tokens to 0 and use their text as thelabels? Something like:from transformers import GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\ndef my_data_collator(text_str):\n encoded_results = tokenizer(text_str, padding=True, truncation=True, return_tensors='pt',\n return_attention_mask=True)\n enncoded_results['attention_mask'] = set_my_attention_mask(encoded_results) #function to set attention mask to 0 on tokens in the question:... part of text_str\n label_ids = get_my_label_str(encoded_results['input_ids']) #function to return list of token ids for question:... part of text_str\n\n batch = {}\n batch['input_ids'] = encoded_results['input_ids']\n batch['past'] = None\n batch['attention_mask'] = encoded_results['attention_mask']\n batch['position_ids'] = None\n batch['head_mask'] = None\n batch['inputs_embeds'] = None\n batch['labels'] = label_ids\n batch['use_cache'] = True\n return batch\n\ntext_str = 'context: 42 is the answer to life, the universe and everything. question: What is the answer to life, universe and everything ? answer: 42'Andbatchwould get passed to aGPT2LMHeadModel?"
},
{
"date": "2020-10-13T16:09:11Z",
"reply": "This seems correct. One more thing to add, you can calculate loss only on thequestion: ...part.To do this setlabelsto -100 for tokens before thequestion:part, so cross entropy will ignore it.Also you won’t need to explicitly set some arguments (position_ids,head_masketc) toNone.They are by defaultNoneso it’s okay if don’t pass them. Will make the code more cleaner."
},
{
"date": "2022-03-23T17:27:21Z",
"reply": "@valhallaif we set the context labels to -100, this will make the model ignore the context while training. In other words, the generation of the questions won’t be based context-based. Am I right?"
}
] |
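A small sketch (not from the thread) of the labelling scheme the replies converge on: compute the loss only on the 'question: ...' part by setting the other labels to -100. Locating the boundary via the length of the tokenised context is an approximation that works for this 'context: ... question: ...' format; the attention-mask variant discussed above is left out for brevity.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
context = "context: 42 is the answer to life, the universe and everything."
question = " question: What is the answer to life, universe and everything ?"
enc = tokenizer(context + question, return_tensors="pt")
n_context = len(tokenizer(context)["input_ids"])
labels = enc["input_ids"].clone()
labels[:, :n_context] = -100          # context tokens are ignored by the cross-entropy loss
loss = model(**enc, labels=labels).loss
print(float(loss))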
Converting Test Case Description into Test case Steps | https://discuss.huggingface.co/t/converting-test-case-description-into-test-case-steps/15332 | 0 | 765 | Hello everyone, I am looking for a model or an approach which can help us convert a test case scenario description into test case steps for web functional testing. This will help a user follow the instructions in the test steps in order to execute a test case for the web. For example:
Test Case: Verify that Feature creation is available within Portfolio Items on Rally
Test Description (Input):
- I log in to www.rally.com, enter username and password, and click on the submit button so that it takes me to the Dashboard page
- I click on Epic Delivery and select Skynet
- I click on Portfolio and select Portfolio Items so that the Features table is displayed
Output (No. / Test Step / Validation Step):
1. Login to www.rally.com
2. Enter Username
3. Enter Password
4. Click on submit button (Validation: Dashboard Page is displayed)
5. Click on Epic Delivery
6. Select Skynet
7. Click on Portfolio
8. Select Portfolio Items (Validation: Features table is displayed)
Please point me in a direction where I'll be able to achieve some results. Thanks | 2022-03-04T02:43:46Z | []
Best Pre-training Strategy | https://discuss.huggingface.co/t/best-pre-training-strategy/15307 | 0 | 740 | Hey community, I hope you’re models are converging fastI’m trying to pre-train a BERT model on short query sentences/words, and i’m wondering what’s the best pre-training strategy to adapt in this situation?Thank in advance. | 2022-03-03T12:09:18Z | [] |
Relative Position Representation/Encoding for Transformer | https://discuss.huggingface.co/t/relative-position-representation-encoding-for-transformer/15018 | 0 | 1,879 | In the "GPT-NeoX-20B: An Open-Source Autoregressive Language Model" paper, why did the authors state that rotary embeddings are a form of static relative positional embeddings? In "How Self-Attention with Relative Position Representations works" (Medium), could anyone explain the rationale behind the lookup indices after the 3rd element all being 6? What is the actual purpose of the skewing mechanism? (The post included two screenshots of figures from the article.) | 2022-02-22T08:45:31Z | []
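On the second question, a small numpy illustration (not from the post) of why the lookup indices saturate: in Shaw et al.-style relative position representations the relative distance j - i is clipped to [-k, k] and shifted by +k, so with k = 3 every position more than three steps away maps to index 6, which is presumably what the article's lookup table shows.
import numpy as np
k, seq_len = 3, 8
i = np.arange(seq_len)[:, None]
j = np.arange(seq_len)[None, :]
indices = np.clip(j - i, -k, k) + k   # lookup indices into the 2k + 1 learned embeddings
print(indices[0])                     # [3 4 5 6 6 6 6 6] -> saturates at 6 after the 3rd element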
How find idea for academic thesis? | https://discuss.huggingface.co/t/how-find-idea-for-academic-thesis/14933 | 2 | 861 | How can I find some idea regarding NLP tasks for graduate thesis? | 2022-02-19T21:57:09Z | [
{
"date": "2022-02-19T22:23:57Z",
"reply": "HelloThis is a great question!I think the questions you should ask yourself, in order of precedence, are:What interests you in NLP? Is there a question that interests you that you couldn’t find a decent answer for in the literature? (Semantic Scholar is great if you’re looking to browse through papers)Does your adviser have any interesting ideas?If you are part of an NLP lab, what are your lab-mates working on? Is there a part of their research you can expand?If you know a language other than English, can you create a resource and model for a task in that specific language?Can you continue someone else’s work?Given the above, I think the most important thing is: your research should be interesting and fun. You want a subject that you’ll get up in the morning and say “I can’t wait to solve this already!” Yes, it will have its frustrating moments, when things don’t quite work, but it’s part of the journey. If your thesis subject bores you, it’s time to change the subject.Hope this helps a bit, and good luck with your thesis!"
},
{
"date": "2022-02-19T22:36:00Z",
"reply": "Yes, and those are very good questions. I like this:Given the above, I think the most important thing is: your research should be interesting and fun. You want a subject that you’ll get up in the morning and say “I can’t wait to solve this already!” Yes, it will have its frustrating moments, when things don’t quite work, but it’s part of the journey. If your thesis subject bores you, it’s time to change the subject.The thing is that I am looking for trends to follow them. As in our lab, I am the only one interested in NLP.Now, I am studying some review papers to understand the area or maybe trends but needed to find the pioneer labs to follow them."
}
] |
Extractive oracle | https://discuss.huggingface.co/t/extractive-oracle/14548 | 0 | 802 | Is there any official script for an extractive oracle using Hugging Face's implementation of ROUGE? An extractive oracle extracts from the source the N sentences that maximize ROUGE-2 (typically). For example, this script computes such an extractive oracle. However, since it uses a different implementation of ROUGE, it might not be completely in line with my other experiments (which use the HF implementation). Thanks! | 2022-02-09T12:37:40Z | []
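Not an official script, but a greedy oracle along these lines can be written with the rouge_score package, which is the backend the Hugging Face ROUGE metric wraps, so the numbers should stay consistent with experiments that use the HF implementation.
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
def greedy_oracle(source_sentences, reference_summary, max_sentences=3):
    # greedily add the source sentence that most improves ROUGE-2 against the reference
    selected, best = [], 0.0
    while len(selected) < max_sentences:
        best_candidate = None
        for sent in source_sentences:
            if sent in selected:
                continue
            candidate = " ".join(selected + [sent])
            score = scorer.score(reference_summary, candidate)["rouge2"].fmeasure
            if score > best:
                best, best_candidate = score, sent
        if best_candidate is None:    # no remaining sentence improves ROUGE-2
            break
        selected.append(best_candidate)
    return selected, best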
A Survey to Understand Challenges of Deploying Text Classification | https://discuss.huggingface.co/t/a-survey-to-understand-challenges-of-deploying-text-classification/14345 | 2 | 933 | Hello everyone,As more and more machine learning libraries are developed, it becomes much easier to build a text classifier. However, there are still a lot of challenges, ranging from collecting the training data, achieving high accuracy, making the classification fair for different groups of users, defending against malicious input, etc. As a group of researchers from MIT, we are curious about what are the challenges for industrial practitioners currently having to deploy text classifiers. If you have experience in deploying text classifiers, I wish you can spend 15-20 minutes filling out this survey to help us understand the challenges. You will also enter a lottery for a $25 gift card.Link to the SurveyWhy should you participate?Have you ever encountered a problem when deploying a text classifier, and could not find a good solution? We believe that there are common problems in the deployment process, while some of them could have a better solution. Our research is to understand the actual challenges in the deployment of text classifiers, and to establish connections between these challenges and future academic research. We will summarize the results into a position paper to call on researchers’ attention to solving problems in the practical deployment. | 2022-02-02T18:36:18Z | [
{
"date": "2022-02-02T21:12:23Z",
"reply": "Hey, nice survey. I just filled it out.What I was missing, though, is more information about the researchers behind this survey, likeMIT, AI Lab XYZ and the names of a couple of people. I think adding this information to the survey would make it much more trustworthy and likely that people fill it out.Also, when will you share the results of the survey?"
},
{
"date": "2022-02-08T19:51:00Z",
"reply": "Hi,Thank you for your quick response. I’m Lei Xu fromMIT Data to AI Lab. This survey is part of my PhD research on deployable and robust text classification.About the timeline, it highly depends on how many responses we can collect. We are targeting at summarizing the results into a research paper in a three-month timeline. I’ll keep you updated."
}
] |
Question Answering model on mathematical domain for the greek language | https://discuss.huggingface.co/t/question-answering-model-on-mathematical-domain-for-the-greek-language/14300 | 0 | 803 | Hello everyone. I want to build a chatbot for my students in order to answer mathematical questions in the Greek language. What I want to use is a question answering BERT model or sentence pair similarity. After trying various multilingual models pretrained on the closed-domain question answering task I didn't have any luck, mainly because the text has specific mathematical terminology (complementary angles, supplementary angles, etc.). I have found a Greek-language BERT model, nlpaueb/bert-base-greek-uncased-v1, which was trained on the Greek part of Wikipedia. Should I use this model, fine-tune it on Greek Wikipedia articles containing mathematical text, and then train that model for the question answering task? And if this is the case, does anyone know of any question answering dataset for the Greek language like the SQuAD dataset? If my understanding is correct, auto-translating the SQuAD dataset won't give good results, since after translation the starting position of the answer may have changed. I would appreciate it if someone could give me some guidelines to follow. | 2022-02-01T13:33:40Z | []
Finetuning German BERT for QA on biomedical domain | https://discuss.huggingface.co/t/finetuning-german-bert-for-qa-on-biomedical-domain/500 | 2 | 986 | Hello there and thank you very much for this wonderful work. I am relatively new to this field, so please bear with my amateur question. I want to perform question-answering on a German Biomedical text. From what I understand up to now, I need to fine-tune German BERT on biomedical QA datasets. Is there any script/pipeline that I should be using for this?Thank you very much in advance. | 2020-07-28T09:01:21Z | [
{
"date": "2020-07-28T13:07:49Z",
"reply": "There is an example of script finetuning a model on question answeringhere, hope it can help!"
},
{
"date": "2022-01-30T06:36:45Z",
"reply": "Here’s the updated linkfor QA examples"
}
] |
[Suggestions and Guidance]Finetuning Bert models for Next word Prediction | https://discuss.huggingface.co/t/suggestions-and-guidance-finetuning-bert-models-for-next-word-prediction/14043 | 4 | 4,527 | Problem Statement : To produce a next word prediction model on legal text. The aim is to build an autocomplete model which will make use of existing typed text as well as a possible concatenation of vectors from prior clauses/paragraphs.Current Approach: Because Bert based model are based on masked language, pretrained models such asLegalBertdid not produce good accuracy for prediction of next word when the word to be predicted was marked as [MASK]. Here is an example sentence, “use of [MASK]” where “marked” is the next word to be predicted in place of “[MASK]” token. (Note that there would not be words present after the mask token, only before the token).Currently approaching the problem as a SequenceClassification problem where labels are the token ids of the words that are to be predicted next. Will also attempt to finetune gpt2 on the legal text using run_clm.py from huggingface examples directoryIs there a better way to approach this problem of next word prediction?Any suggestions and guidance would be welcome.Thank you in advance | 2022-01-24T11:15:47Z | [
{
"date": "2022-01-24T13:02:21Z",
"reply": "Hi Sumanth! I believe you are already on the right track by finetuning gpt2. The difference is that GPT was trained using causal/autoregressive attention. It means that GPT is specifically trained to predict the next word without having access to the word to the right of the masked token (unlike BERT).The different models and their architectures are depicted in this chart:Capture684×642 56.9 KBLong story short - you should see better results with GPT2. Let us know how it goes.CheersHeiko"
},
{
"date": "2022-01-25T15:51:15Z",
"reply": "Hey, Thanks for the prompt reply. Will focus my attempts more on autoregressive models."
},
{
"date": "2022-01-26T13:44:33Z",
"reply": "@marshmellow77a question. Is there a way to finetune and use T5 or BigBird for this Next word prediction task?. Unable to find tutorials for using these models for Next word prediction."
},
{
"date": "2022-01-26T15:11:48Z",
"reply": "Yes, and it is actually pretty easy thanks to a script provided by Hugging Face:transformers/run_clm.py at master · huggingface/transformers · GitHubYou can use this script to finetune models for causal language modeling (i.e. next word prediction) on a text file or a dataset."
}
] |
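A small sketch (not from the thread) of how a causal LM is used for next-word suggestions once fine-tuned; "gpt2" is only a placeholder for a checkpoint fine-tuned on legal text with run_clm.py.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
prefix = "use of"
inputs = tokenizer(prefix, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]         # distribution over the next token
top = torch.topk(torch.softmax(logits, dim=-1), k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p:.3f}")
Prior clauses or paragraphs can simply be prepended to the prefix, up to the model's context length.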
Suggestions for an open source tagging tool to build custom LayoutLMv2 datasets | https://discuss.huggingface.co/t/suggestions-for-an-open-source-tagging-tool-to-build-custom-layoutlmv2-datasets/14103 | 0 | 897 | Any suggestions on an Open Source tagging tool to get data in the format expected by the LayoutLMv2 model? I take it the standard format is similar to theFUNSD dataset. | 2022-01-25T16:36:55Z | [] |
Paper Notes: Deepspeed Mixture of Experts | https://discuss.huggingface.co/t/paper-notes-deepspeed-mixture-of-experts/13908 | 2 | 2,139 | SummaryThe legends over at DeepSpeed released apaperon scaling Mixture of Experts with a bunch of cool ideas.Since they will probably release some pytorch code soon I wanted to summarize/discuss the findings so that I learn them better.I provide 0 background on Mixture of Experts, assume knowledge of Top1 vs Top2 gating, for selfish/lazy reasons. Read thedeepspeed blog postfor background.I abstract the term “acc” to encompass all types of metrics: validation perplexity, zero shot accuracy, etc.I used@srushtrick of trying to read critically (to get your brain to think harder about other peoples’ results) but I don’t want to come off as too negative. I really enjoyed this paper and am excited to read the code!The DeepSpeed team proposes:(a) (sec 4.1) architectural modifications that reduce the number of experts without hurting acc.(b) (sec 4.1) Moe 2 Moe distillation, (instead of MoE 2 dense distillation like the FAIR paper (appendix Table 9) and theSwitch paper)(c) (sec 5) Systems Optimizations to make inference fastImproved Communication Collectives for MoE Inference (hierarchical all2all)tutelstyle single-device kernels to make routing tokens to experts fast.4D parallelism!?I now cover architecture and distillation, and save systems optimizations for later because I don’t fully understand them yet.Architecture: Pyramid Residual MoEThis section is really well written. It contains two very nice ablations that motivated the changes:Phenomenon 1: “Pyramid”We compare the performance of two different half-MoE architectures. More specifically, we put MoE layers in the first half of the model and leave the second half’s layers identical to the dense model. We switch the MoE layers to the second half and use dense at the first half.The results show that deeper layers benefit more from large number of experts.This also saves a ton of parameters: 40% reduction at 1.3B dense equivalent size, which will be useful at inference time.Phenomenon 2: “Residual”we can achieve the benefit of using two experts per layer but still use one communication.They frame this as trying to get the benefits of top2 routing without the costs.But, basically MoeLayers become only half sparse – a dense ffn that process the input as does 1 expert – the results are added.Compared to top2 where 2 different sparse experts process the input, this is cheaper because there is less communication (you only need to send the input to 1 place instead of 2?)Note this does not improve acc compared to top2, just speed.Putting it all together:FAIR arch (see table 1) (52B Params)Layers: top2 gating (each token gets routed to 2 experts)512 experts at each MoE layerDeepspeed Arch: (31B params)Layers: each token processed by dense FFN and 1 expert (same FLOPs as top2 gating if same number of experts, I believe).pyramid: somewhere between 32 and 128 experts at each Moe layer – way fewer params!In terms of acc, (PIQA is the only overlapping evaluation),the 31B Deepspeed performs between the FAIR 52B and the FAIR 207B and was probably lower training cost than the 52B, even before all the systems optimizations in section 5. Nice!With the systems optimizations they say training is 5x faster than dense (to the same acc). The FAIR paper says “4x faster than dense”, but measures TFLOPS, which make the extra communication required for MoE appear to be free. 
So all in all this definitely seems like a better architecture.It would have been cool if Tables 2,4 had training cost and inference cost next to the few shot performances (or 1 big joined table somewhere!).Staged Knowledge Distillation: Mixture Of Students (MoS)Caveat before you read this section: in most distillation results, the student model is MUCH smaller than the teacher model, like half as large or so. Here, the student model is only 12.5% smaller than the teacher model. (3 fewer layers, 4B fewer params (31B vs 27B)).They are able to lose very little performance, which is nice, but they also didn’t really lose that much weight, and it would be interesting to try to replicate what they did with smaller students.Caveat 2: name deeply misleading. It’s normal KD but they switch to cross entropy loss halfway through that’s it!Anyways, these are the first published MoE 2 MoE Distillation results. The switch paper and FAIR paper both distill Moe 2 Dense models (since they are much easier to serve than MoE models, a gap deepspeed claims to eliminate in section 5 – the one I don’t understand yet:( ).They use the same KD loss as the other papers, but they turn it off halfway through training.They say this improves acc, but I am most interested in the speed implications. I tried MoE2MoE distillation but it was extremely slow (like 10x slower than Dense2Dense) because of teacher inference every step.If we could only run the teacher forward pass for part of the student training, that would be sweet!NextLet me know any inaccuracies, important omissions, what you ate for lunch follow up ideas!Next week I will try to tackle Section 5 (Systems optimizations) and if I don’t I will burn a 20 dollar bill and record it! | 2022-01-19T21:19:55Z | [
{
"date": "2022-01-20T13:57:06Z",
"reply": "What is 4D parallelism?"
},
{
"date": "2022-01-20T16:42:26Z",
"reply": "sshleifer:Next week I will try to tackle Section 5 (Systems optimizations) and if I don’t I will burn a 20 dollar bill and record it!I’ll hold you to it@sshleifer=)"
}
] |
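A purely illustrative PyTorch sketch (not DeepSpeed's code) of the Residual MoE idea summarised in the notes above: a dense FFN processes every token, a top-1 gate routes each token to one sparse expert, and the two outputs are summed, aiming for top-2-like quality at top-1 communication cost.
import torch
import torch.nn as nn
def ffn(d_model, d_hidden):
    return nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
class ResidualMoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8):
        super().__init__()
        self.dense = ffn(d_model, d_hidden)             # always applied to every token
        self.experts = nn.ModuleList(ffn(d_model, d_hidden) for _ in range(num_experts))
        self.gate = nn.Linear(d_model, num_experts)
    def forward(self, x):                               # x: (tokens, d_model)
        gate_probs = self.gate(x).softmax(dim=-1)
        top_prob, top_idx = gate_probs.max(dim=-1)      # top-1 routing
        expert_out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):       # naive loop; no capacity or load balancing
            mask = top_idx == e
            if mask.any():
                expert_out[mask] = expert(x[mask])
        return self.dense(x) + top_prob.unsqueeze(-1) * expert_out
print(ResidualMoELayer()(torch.randn(16, 512)).shape)   # torch.Size([16, 512])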
Using mixup on RoBERTa | https://discuss.huggingface.co/t/using-mixup-on-roberta/306 | 6 | 2,192 | Hello everyone! I tried to apply mixup, a data augmentation technique popular in computer vision, but in this case to NLP. The algorithm I developed has two phases. The first phase gets the representation of each sentence in the batch by computing the mean of the corresponding hidden states of the last layer. The fragment below shows the corresponding module.
class LanguageModel(nn.Module):
def __init__(self, pretrained_model_name, device="cuda:0", anonymized_tokens=False):
super(LanguageModel, self).__init__()
# Load tokenizer
self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
# Load model
self.config = AutoConfig.from_pretrained(pretrained_model_name)
self.config.output_hidden_states = True
self.model = AutoModel.from_pretrained(pretrained_model_name, config=self.config).to(device)
def forward(self, input_ids, attention_mask):
outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
)
activations = torch.mean(outputs[0], axis=1)
        return activations
After that, it generates the mixup examples using the function proposed in the original code, but with the sentence representations computed in the previous step as input, instead of images as in the original work. Once the mixup examples are generated, the second phase makes the predictions (the fragment below shows the corresponding module). Finally, the loss is computed in the same way as in the original work.
class ClassifierLayer(nn.Module):
def __init__(self, num_classes, dropout_rate=0.1, petrained_size=768, device="cuda:0"):
super(ClassifierLayer, self).__init__()
self.layer = nn.Linear(petrained_size, num_classes, bias=True).to(device)
self.drop = nn.Dropout(dropout_rate)
def forward(self, z):
activations = self.layer(self.drop(z))
        return activations
The fragment below shows a summary of the proposed training loop; the full script used is here:
for idx_epoch in range(0, args.num_train_epochs):
language_model.train()
classifier_layer.train()
accs = 0; ps = 0; rs = 0; f1s = 0; lss = 0
for (idx_batch, train_batch) in enumerate(train_dataloader):
# 0: input_ids, 1: attention_mask, 2:token_type_ids, 3: labels
batch_train = tuple(data_.to(device) for data_ in train_batch)
labels_train = batch_train[-1]
inputs = {
'input_ids': batch_train[0],
'attention_mask': batch_train[1],
}
optimizer.zero_grad()
# 1st phase: conextual embeddings
contextual_embeddings = language_model(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
)
# 2nd phase: mixup
inputs, targets_a, targets_b, lam = mixup_data(contextual_embeddings, labels_train, args.alpha_mixup, use_cuda)
inputs, targets_a, targets_b = map(Variable, (inputs, targets_a, targets_b))
predictions = classifier_layer(inputs)
loss = mixup_criterion(criterion, predictions, targets_a, targets_b, lam)
# 2nd phase: standard
# predictions = classifier_layer(contextual_embeddings)
# loss = criterion(predictions, labels_train)
lss += loss
loss.backward()
optimizer.step()
        scheduler.step()
Experimenting with this approach, the results obtained are very poor… Have any of you worked on a similar approach with successful/good results?
Thanks. | 2020-07-15T13:44:46Z | [
{
"date": "2020-07-15T20:55:49Z",
"reply": "Hi@franborjavalero!This is really interesting. I remember@sguggergot a little bump using mixup after embeddings with ULMFiT. Would be really awesome to share this code as implementation for this is not trivial."
},
{
"date": "2020-07-15T21:08:00Z",
"reply": "It wasn’t for transformers, but ULMFiT. Didn’t get the chance to try it on transformers model.Also, I was using themanifold mixupversion, which applies the mixup at a random layer (not necessarily the embedding), though this could also mess up the attention mechanism in tansformers."
},
{
"date": "2020-07-15T21:16:17Z",
"reply": "Thanks for sharing@sgugger.Data augmentation for text classification really is a tough one. Is there anything you consider promising?@franborjavaleroyou might want to checkout thisthread"
},
{
"date": "2020-07-15T21:23:22Z",
"reply": "Haven’t found anything that really stands out for now, so no magic trick on my side"
},
{
"date": "2020-07-15T21:26:46Z",
"reply": "Syntactic Data Augmentation Increases Robustness to Inference Heuristicsdiscussed in the other thread seems interesting for NLI"
},
{
"date": "2020-07-21T18:20:25Z",
"reply": "You might find our work on Cost-Sensitivity to be of interest. We found it to be a good alternative to data augmentation. [Paper hereandCode here]"
}
] |
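For readers following the snippets above: the two mixup helpers referenced from the original mixup repository look roughly like the sketch below (here x would be the batch of mean-pooled sentence representations rather than a batch of images). This is reproduced from memory of the original code, so treat it as an approximation rather than the exact reference.
import numpy as np
import torch

def mixup_data(x, y, alpha=1.0, use_cuda=True):
    # sample the mixing coefficient and a random permutation of the batch
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))
    if use_cuda:
        index = index.cuda()
    mixed_x = lam * x + (1 - lam) * x[index, :]
    y_a, y_b = y, y[index]
    return mixed_x, y_a, y_b, lam

def mixup_criterion(criterion, pred, y_a, y_b, lam):
    # the loss is the same convex combination applied to the two label sets
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)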
How does the vocabulary size count towards total parameter size of a model? | https://discuss.huggingface.co/t/how-does-the-vocabulary-size-count-towards-total-parameter-size-of-a-model/13833 | 0 | 2,191 | TL;DR: The vocabulary size changes the number of parameters of the model. If we were to compare models with different vocabulary sizes, what would be the fairest strategy: fixing the total number of parameters, or having the same architecture with the same number of layers, attention heads, etc.?
We have a set of mini models which are pretrained from scratch using the RoBERTa architecture. The number of layers, hidden sizes, and number of attention heads correspond to those of the mini models in the BERT paper. We wanted to experiment with the effect of different tokenization algorithms on downstream performance and, to this end, fit BPE, WordPiece, and WordLevel tokenizers with 50K, 50K, and 100K vocabulary sizes respectively, in addition to character-based tokenization. The reason for the increased vocabulary size for the WordLevel tokenization is to decrease the number of OOV tokens.
Only later did we notice that the difference between vocabulary sizes causes a huge difference in the number of parameters. The model sizes are 20.4M, 20.4M, 33.2M, and 8.1M for the BPE, WordPiece, WordLevel, and char tokenizer-based models respectively. This means that the percentages of parameters coming from the vocabulary of the model are 63%, 63%, 77%, and 1% for the BPE, WordPiece, WordLevel, and char tokenizer-based models respectively.
My question is: is it unfair to compare the downstream performance of these models on the same task with the same dataset just because the numbers of parameters are different? I would assume that in a given forward pass through an input, only a very small part of the vocabulary is updated, because a parameter in a layer of the transformer blocks of the model is updated every step, whereas a parameter in the vocabulary is updated only when its token appears in the input text. Therefore, to say that, for example, all 100K vocabulary parameters of the WordLevel tokenizer-based model contribute to the computation of an input is not true. This means that as long as the numbers of parameters in the transformer blocks of the models are comparable, it is fair to compare the performance of the models. If this assumption is incorrect, I would be happy to be corrected.
Thanks for your time. | 2022-01-18T07:28:48Z | []
Guide: The best way to calculate the perplexity of fixed-length models | https://discuss.huggingface.co/t/guide-the-best-way-to-calculate-the-perplexity-of-fixed-length-models/193 | 9 | 8,648 | Hey all. Just thought you might be interested in a page I just added to the research docs on the perplexity of fixed-length models.
Perplexity (PPL) is defined as the exponentiated average negative log-likelihood of a sequence. For a sequence X of length t, this is defined as
\text{PPL}(X) = \exp \left\{ -\frac{1}{t} \sum_i^t \log p_\theta (x_i|x_{<i}) \right\}
But with fixed-length models (like most transformers), we can’t always condition on the entire preceding subsequence when predicting each token.
The initial instinct for many in dealing with this problem is to break the whole sequence into segments equal to the model’s max input size and calculate the likelihoods of each segment independently. This is not the best approach, however, since it gives the model very little context to use for prediction at the beginning of each segment. I’ll illustrate this with the following gif, where we imagine a model with a max input size of 6 adding up the log-likelihoods for the sentence, “Hugging Face is a startup based in New York City and Paris”.
[animation: ppl_chunked]
When the model starts the second segment, it has to try to predict the word “in” without any context, even though we have 5 words before it that the model could be using (since we said the max input size is 6).
A better approach is to instead employ a sliding window strategy, where you continually move the context across the sequence, allowing the model to take advantage of the available context.
[animation: ppl_sliding]
This is slower to compute, but will typically yield better scores and is actually much closer to the way the sequence probabilities are formally decomposed (e.g. see the equation above).
In the guide, we show how to do this in a strided way with GPT-2. When using the first, naive approach, GPT-2 gets a PPL of 19.64 on WikiText-2. In contrast, when we use a strided sliding window, this score improves dramatically down to 16.53. | 2020-07-10T17:07:49Z | [
{
"date": "2020-10-20T20:37:42Z",
"reply": "Hi, I have a question about the perplexity calculation from theguide.Why do we divide byiin the example, seeppl = torch.exp(torch.stack(lls).sum() / i)?If you have a codebase or paper that exemplifies this behaviour could you please share it?Thanks!"
},
{
"date": "2020-10-20T22:01:25Z",
"reply": "Hmm yes, you should actually divide byencodings.input_ids.size(1)sinceidoesn’t account for the length of the last stride.I also just spotted another bug. When the length of the last segment is less thanstride, thelog_likelihoodcalculation is slightly off. The difference in scores won’t be significant, but I’ve update the guide on master. This should be right:max_length = model.config.n_positions\nstride = 512\n\nlls = []\nfor i in tqdm(range(0, encodings.input_ids.size(1), stride)):\n begin_loc = max(i + stride - max_length, 0)\n end_loc = min(i + stride, encodings.input_ids.size(1))\n trg_len = end_loc - i # may be different from stride on last loop\n input_ids = encodings.input_ids[:,begin_loc:end_loc].to(device)\n target_ids = input_ids.clone()\n target_ids[:,:-trg_len] = -100\n\n with torch.no_grad():\n outputs = model(input_ids, labels=target_ids)\n log_likelihood = outputs[0] * trg_len\n\n lls.append(log_likelihood)\n\nppl = torch.exp(torch.stack(lls).sum() / end_loc)Does that answer your question?"
},
{
"date": "2020-10-21T14:02:18Z",
"reply": "yep thanks Joe!I was thinking something similar but wanted to check in case I was missing something"
},
{
"date": "2021-03-01T22:57:39Z",
"reply": "Hi@joeddav- the input_ids and target_ids are the same. Shouldn’t target_ids be shifted by one?"
},
{
"date": "2021-03-01T23:09:39Z",
"reply": "Nevermind - just found out that labels are shifted inside the model and the loss for last one gets ignored.huggingface.coOpenAI GPT2We’re on a journey to advance and democratize artificial intelligence through open source and open science.labels(torch.LongTensorof shape(batch_size, sequence_length), optional) – Labels for language modeling. Note that the labelsare shiftedinside the model, i.e. you can setlabels = input_idsIndices are selected in[-100, 0, ..., config.vocab_size]All labels set to-100are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size]"
},
{
"date": "2021-07-15T19:44:29Z",
"reply": "@joeddavI read and read the page several times. Thank you!What would be the simplest way of accessing a perplexity score for a sentence and its parts? I’m building an application in NodeJS and hoping to access a perplexity score via an API - paid is fine for now. I think I could set up the Python model somewhere and expose it via an API but this hopefully will come later after some MVP testing.Thank you again!"
},
{
"date": "2021-10-16T20:01:19Z",
"reply": "I am wondering whether this is still correct. So what you do is, for all input sequences:neg_log_likelihood = outputs[0] * trg_lenYet the first output of causal LMs isCrossEntropyLoss, not NLLL. So from that you can just get the mean CE loss from all sequences and get the exponential.EDIT: that is also how it is implemented in the Trainer and run_clm.py script. First gather all losses for all batches in the whole validation set and take the mean.github.comhuggingface/transformers/blob/11c69b80452fae4b13c6d8bc22bdc19f3a752199/src/transformers/trainer.py#L2353-L2354if all_losses is not None:metrics[f\"{metric_key_prefix}_loss\"] = all_losses.mean().item()Then take the exponential.github.comhuggingface/transformers/blob/11c69b80452fae4b13c6d8bc22bdc19f3a752199/examples/pytorch/language-modeling/run_clm.py#L495# Evaluationif training_args.do_eval:logger.info(\"*** Evaluate ***\")metrics = trainer.evaluate()max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))try:perplexity = math.exp(metrics[\"eval_loss\"])except OverflowError:perplexity = float(\"inf\")metrics[\"perplexity\"] = perplexitytrainer.log_metrics(\"eval\", metrics)trainer.save_metrics(\"eval\", metrics)kwargs = {\"finetuned_from\": model_args.model_name_or_path, \"tasks\": \"text-generation\"}if data_args.dataset_name is not None:kwargs[\"dataset_tags\"] = data_args.dataset_name"
},
{
"date": "2021-11-26T15:26:25Z",
"reply": "I’d agree with@BramVanroy, any thoughts@joeddavon the above post?BramVanroy:Yet the first output of causal LMs isCrossEntropyLoss, not NLLL. So from that you can just get the mean CE loss from all sequences and get the exponential.I don’t understand the multiplication bytrg_lenin this example. Also on my dataset it explodes the perplexity by orders of magnitude above a uniform upper bound oflog(|Vocab Size|)"
},
{
"date": "2021-12-16T03:36:34Z",
"reply": "I think it is correct forPerplexity of fixed-length modelssince batch size is 1.B.T.W. most libraries like simpletransformers implement perplexity calculation by taking exp(sum_of_loss_in_all_batches / num_of_batch) likesimpletransformers/language_modeling_model.py at 254aaaa218635ef68f80ad1917403e7b7e24d710 · ThilinaRajapakse/simpletransformers · GitHub"
}
] |
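A tiny numeric illustration of the aggregation question raised in the last few replies, with made-up numbers: exponentiating the plain mean of per-window losses and exponentiating the token-weighted mean only agree when every window contains the same number of target tokens.
import math

losses = [2.1, 2.3, 3.0]   # mean cross-entropy per window (hypothetical values)
n_toks = [512, 512, 87]    # number of target tokens per window (last stride is shorter)

ppl_unweighted = math.exp(sum(losses) / len(losses))
ppl_weighted = math.exp(sum(l * n for l, n in zip(losses, n_toks)) / sum(n_toks))
print(ppl_unweighted, ppl_weighted)  # differ because the last window is shorter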
Few shot automatic moderation | https://discuss.huggingface.co/t/few-shot-automatic-moderation/12102 | 0 | 661 | The Facebook Files show that there’s a lack of human-based moderation on social networks. What about automatic moderation, and how does it cope with reduced dataset availability? I’m wondering what the actual research status on this subject is. | 2021-11-20T14:59:40Z | []
Let's Make an Ethics Chat Bot that's Not Racist! | https://discuss.huggingface.co/t/lets-make-an-ethics-chat-bot-thats-not-racist/11905 | 0 | 713 | I am a philosopher and I have studied Ethics for over 20 years (check me here j.mp/joshtedx). I am disheartened to see a few recent attempts to make Ethics AIs have not turned out well (racist ethics AIs - Google Search)This should not happen. I am quite certain I can make an Ethics AI Few shot, Q and A example or cloze or knowledge base that is not racist for you, if you can make the chat bot part. I can also easily correct the answers from any large NLP model.Let’s show the world not all Ethics AIs will end up racist!I suggest doing this as a not-for profit project that others could even then use in their chat bots to correct for unethical answers as an out-of-the-box solution!If anyone is interested please LMK! Let’s make something good for humanity! | 2021-11-16T19:32:49Z | [] |
New Paper: Masked Autoencoders Are Scalable Vision Learners | https://discuss.huggingface.co/t/new-paper-masked-autoencoders-are-scalable-vision-learners/11673 | 0 | 1,341 | (Meta-comment: I’m actually not sure which forum this would best fit into - seems like it would be useful to have a place where we can discuss new papers.)
This new work by Kaiming He et al seems pretty interesting - they use a very simple setup for masking during pre-training a ViT and it looks like they get very good results across a variety of tasks.
So far, I see an implementation by lucidrains.
arXiv.org, Masked Autoencoders Are Scalable Vision Learners: “This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs...” | 2021-11-14T01:55:46Z | []
Improving performance of Wav2Vec2 fine tuning with word piece vocabulary | https://discuss.huggingface.co/t/improving-performance-of-wav2vec2-fine-tuning-with-word-piece-vocabulary/6292 | 5 | 2,856 | Hello,
I’m fine-tuning XLSR-Wav2Vec2 on 200+ hours of speech in a language not in the original pretraining. The training progresses nicely, however when it reaches about 40 WER it starts to overfit (WER doesn’t progress much and train loss decreases while eval loss goes up). I’ve tried increasing some params of the SpecAugment, but it only helped a bit.
I’ve noticed that using the SpeechBrain lib implementation I’m getting slightly better results (at the expense of training stability) and was wondering if it is due to the larger vocabulary they use there. Has anyone tried to use a tokenizer with a vocabulary that contains subwords and words in addition to characters? I couldn’t find any experiment that uses it with Hugging Face Transformers W2V2.
I see in the Wav2Vec 2 paper they say: “We expect performance gains by switching to a seq2seq architecture and a word piece vocabulary.” (https://arxiv.org/pdf/2006.11477.pdf)
Any suggestions on how to do that with Hugging Face Transformers?
P.S. my dataset is noisy and not super clean.
Any help or suggestion will be very helpful.
Samuel | 2021-05-21T13:31:58Z | [
{
"date": "2021-05-26T07:07:21Z",
"reply": "Not sure how I’d switch to a seq2seq architecture, but for word piece, I think you just need to change the vocab passed to theWav2Vec2CTCTokenizer. Instead of the individual alphabet characters used for the vocab in the XLSR example, you’d need to use the wordpiece/BPE algorithm on your language text data and pass that through."
},
{
"date": "2021-05-28T13:49:02Z",
"reply": "Thanks for the answer!Any code examples or ideas on how to use word piece tokenizer easily? I understand I’ll need to basically override most of the functions in transformers/models/wav2vec2/tokenization_wav2vec2.py"
},
{
"date": "2021-06-03T07:00:39Z",
"reply": "you can look intosentencepiece.Hope that helps!"
},
{
"date": "2021-07-30T17:19:24Z",
"reply": "This can be accomplished by using theBertTokenizerand settingvocab_sizeto 30522. Keep in mind that you don’t want to use the existinglm_headweights in theWav2Vec2ForCTCcheckpoint though. I did this with the TensorFlow version, but I don’t think there is a vocab limit on the PyTorch ctc loss either."
},
{
"date": "2021-10-27T03:02:25Z",
"reply": "Thanks for the answer!I am also trying to implement this. Can I get any code examples for this? Thank you."
}
] |
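Building on the BertTokenizer suggestion in the replies above, here is a rough sketch of what the word-piece variant could look like with Hugging Face Transformers. The checkpoint name and hyper-parameters are only placeholders, and this is not an endorsed recipe, just one way to wire a subword vocabulary into the CTC head.
from transformers import BertTokenizer, Wav2Vec2ForCTC

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    vocab_size=tokenizer.vocab_size,      # CTC head now predicts word pieces instead of characters
    pad_token_id=tokenizer.pad_token_id,  # used as the CTC blank token
    ctc_loss_reduction="mean",
    ignore_mismatched_sizes=True,         # in case the checkpoint already ships a character-level head
)

# labels: subword ids of the transcript (no special tokens); padded positions set to -100 so CTC ignores them
batch = tokenizer(["a transcript of the audio"], add_special_tokens=False,
                  padding=True, return_tensors="pt")
labels = batch.input_ids.masked_fill(batch.attention_mask.ne(1), -100)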
[Help needed] Extending Trainer for Meta learning | https://discuss.huggingface.co/t/help-needed-extending-trainer-for-meta-learning/635 | 3 | 1,541 | I want to implement MAML on the GLUE dataset with transformers. In my case, the query and support set will come from the same dataset. I’ve read some work in meta learning from the HF team (Wolf et al., 18).
Although I’ve implemented my training loop (with higher; open to other methods as well), I am still looking for a correct reference implementation of MAML or Reptile to confirm against. Currently my code inherits from Trainer. If anyone could share a sample snippet that performs the MAML gradient updates, that’d be really helpful. | 2020-08-08T11:31:51Z | [
{
"date": "2020-08-17T03:27:18Z",
"reply": "So theMetaDatasetwraps anyGlueDatasetto give a list containing all classes whenmeta_dataset[0]is called. So this will become,num_of_classes (N)way K shot example.I’ve written this, which extendsTrainerfor MAML.def train(self):\n\n self.create_optimizer_and_scheduler(\n int(\n len(self.train_dataloader)\n // self.args.gradient_accumulation_steps\n * self.args.num_train_epochs\n )\n )\n\n logger.info(\"***** Running training *****\")\n\n self.global_step = 0\n self.epoch = 0\n\n eval_step = [2 ** i for i in range(1, 20)]\n inner_optimizer = torch.optim.SGD(\n self.model.parameters(), lr=self.args.step_size\n )\n self.model.train()\n\n tqdm_iterator = tqdm(self.train_dataloader, desc=\"Batch Index\")\n\n # n_inner_iter = 5\n self.optimizer.zero_grad()\n query_dataloader = iter(self.train_dataloader)\n\n for batch_idx, meta_batch in enumerate(tqdm_iterator):\n target_batch = next(query_dataloader)\n outer_loss = 0.0\n # Loop through all classes\n for inputs, target_inputs in zip(meta_batch, target_batch):\n\n for k, v in inputs.items():\n inputs[k] = v.to(self.args.device)\n target_inputs[k] = v.to(self.args.device)\n\n with higher.innerloop_ctx(\n self.model, inner_optimizer, copy_initial_weights=False\n ) as (fmodel, diffopt):\n\n inner_loss = fmodel(**inputs)[0]\n diffopt.step(inner_loss)\n outer_loss += fmodel(**target_inputs)[0]\n\n self.global_step += 1\n self.optimizer.step()\n\n outer_loss.backward()\n\n if (batch_idx + 1) % self.args.gradient_accumulation_steps == 0:\n torch.nn.utils.clip_grad_norm_(\n self.model.parameters(), self.args.max_grad_norm\n )\n\n # Run evaluation on task list\n if self.global_step in eval_step:\n output = self.prediction_loop(self.eval_dataloader, description = \"Evaluation\")\n self.log(output.metrics)\n\n output_dir = os.path.join(\n self.args.output_dir, f\"{PREFIX_CHECKPOINT_DIR}-{self.global_step}\",\n )\n self.save_model(output_dir)"
},
{
"date": "2020-08-19T13:44:28Z",
"reply": "I’m not completely sure howhigherworks. If someone can provide a minimal example with bare Pytorch, that’d be helpful."
},
{
"date": "2021-10-19T15:23:49Z",
"reply": "Hey,@prajjwal1did you implemented this?"
}
] |
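Since the last reply asks for a minimal bare-PyTorch example, here is a self-contained toy sketch of a MAML meta-update (no higher, no Trainer) on a made-up sine-regression task with a hand-rolled functional MLP. All names and hyper-parameters are illustrative; the point is only the differentiable inner update and the outer backward pass, not how to plug it into GlueDataset or Trainer.
import torch

def forward(params, x):
    # tiny functional MLP so the inner-loop update stays inside the autograd graph
    w1, b1, w2, b2 = params
    h = torch.relu(x @ w1 + b1)
    return h @ w2 + b2

def mse(pred, y):
    return ((pred - y) ** 2).mean()

# meta-parameters
w1 = torch.randn(1, 40, requires_grad=True)
b1 = torch.zeros(40, requires_grad=True)
w2 = torch.randn(40, 1, requires_grad=True)
b2 = torch.zeros(1, requires_grad=True)
meta_params = [w1, b1, w2, b2]
meta_opt = torch.optim.Adam(meta_params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # tasks per meta-batch
        # toy task: regress a random sine wave, split into support and query sets
        amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3
        x_s, x_q = torch.rand(10, 1) * 10 - 5, torch.rand(10, 1) * 10 - 5
        y_s, y_q = amp * torch.sin(x_s + phase), amp * torch.sin(x_q + phase)

        # inner loop: SGD step(s) on the support set, kept differentiable via create_graph=True
        fast = meta_params
        for _ in range(1):
            grads = torch.autograd.grad(mse(forward(fast, x_s), y_s), fast, create_graph=True)
            fast = [p - inner_lr * g for p, g in zip(fast, grads)]

        # outer loss on the query set; backward flows through the inner update into meta_params
        mse(forward(fast, x_q), y_q).backward()
    meta_opt.step()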
Detection Transformer (DETR) for text detection in documents | https://discuss.huggingface.co/t/detection-transformer-detr-for-text-detection-in-documents/10396 | 0 | 1,969 | Hi,
I am currently doing some experiments on text detection with a transformer-based model. Does anyone have experience with this or recommendations?
My idea is to train DetrForObjectDetection on the COCOText-v2 dataset (COCOText-v2).
I have tested some setups:
pretrained facebook/detr-resnet-50 with num_queries=2000 (a good value for an A4 document page)
from scratch with an efficientnet_b0 backbone from timm, with backbone lr 0.001 and lr 0.01
but in all cases the loss and train loss get stuck at ~1.7 after ~35 epochs, with 2 val steps per epoch.
Another problem I have faced is the COCOevaluator: there seems to be a problem with “numpy has no append” at the validation step:
in COCOeval:
problem: self.eval_imgs[iou_type].append(eval_imgs)
One sample from my train dataloader looks like this:
# pixel_values 1 example
torch.Size([3, 640, 640])
# target for this example
{'boxes': tensor([[0.0810, 0.8323, 0.1621, 0.1356],
[0.3031, 0.3070, 0.0367, 0.0088],
[0.5304, 0.3418, 0.0349, 0.0102]]), 'class_labels': tensor([0, 0, 0]), 'image_id': tensor([367969]), 'area': tensor([5295.0200, 103.8200, 105.6000]), 'iscrowd': tensor([0, 0, 0]), 'orig_size': tensor([640, 556]), 'size': tensor([640, 556])}
so the data after the Dataloader seems to be ok.
Some more code:
COCO_stuff (adapted from: COCOText, PyTorch COCODataloader):
def collate_fn(batch):
""" process on every sample in batch
"""
feature_extractor = DetrFeatureExtractor()
pixel_values = [item[0] for item in batch]
encoding = feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
labels = [item[1] for item in batch]
batch = dict()
batch['pixel_values'] = encoding['pixel_values']
batch['pixel_mask'] = encoding['pixel_mask']
batch['labels'] = labels
return batch
class CocoTextDataset(Dataset):
"""MSCOCO Text V2 Dataset
"""
def __init__(self, path, ann_file_name, image_folder_name, feature_extractor, is_train=True, data_limit=None):
self.path = path
self.annotation_path = os.path.join(path, ann_file_name)
self.image_folder_path = os.path.join(path, image_folder_name)
self.feature_extractor = feature_extractor
self.data_limit = data_limit
self.dataset_length = 0
self.coco_text = COCO_Text(annotation_file=self.annotation_path)
if is_train:
print('Load Training Data')
self.set_part = self.coco_text.train
else:
print('Load Validation Data')
self.set_part = self.coco_text.val
# create sets for train and validation
self.cleaned_img_to_ann_ids = {k:v for k,v in self.coco_text.imgToAnns.items() if v and k in self.set_part}
# sort out images and annotations, which are not readable or have uncorrect bound boxes
self.ann_ids = list()
self.image_ids = list()
for entry_id in self.cleaned_img_to_ann_ids.values():
annotations = self.coco_text.loadAnns(entry_id)
allowed_ann_ids = list()
allowed_image_ids = list()
for annotation in annotations:
if annotation['legibility'] == 'legible' and len(annotation['bbox']) == 4:
allowed_ann_ids.append(annotation['id'])
if annotation['image_id'] not in allowed_image_ids:
allowed_image_ids.append(annotation['image_id'])
# if image has no annotations, skip it
if allowed_image_ids and allowed_ann_ids:
self.image_ids.append(allowed_image_ids)
self.ann_ids.append(allowed_ann_ids)
if self.data_limit:
self.image_ids = self.image_ids[0:data_limit]
self.ann_ids = self.ann_ids[0:data_limit]
self.image_info = list()
self.ann_info = list()
for id in self.image_ids:
info = self.coco_text.loadImgs(id)
self.image_info.append(info)
for id in self.ann_ids:
info = self.coco_text.loadAnns(id)
self.ann_info.append(info)
if len(self.image_info) == len(self.ann_info):
print('Dataset created sucessfully')
self.dataset_length = len(self.image_info)
else:
print(f'Error: Number of images and annotations do not match. {len(self.image_info)} images and {len(self.ann_info)} annotations')
sys.exit(0)
def __len__(self):
return self.dataset_length
def __getitem__(self, index):
image_id = self.image_ids[index]
image_file = self.image_info[index]
annotations = self.ann_info[index]
image_path = os.path.join(self.image_folder_path, image_file[0]['file_name'])
image = Image.open(image_path).convert("RGB")
target = {'image_id': image_id[0], 'annotations': annotations}
encoding = self.feature_extractor(images=image, annotations=target, return_tensors="pt")
pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension
target = encoding["labels"][0] # remove batch dimension
return pixel_values, target
class COCODatasetLoader(pl.LightningDataModule):
def __init__(self, path, ann_file_name, image_folder_name, feature_extractor, batch_size, worker, collator, data_limit=None):
super().__init__()
self.path = path
self.ann_file_name = ann_file_name
self.image_folder_name = image_folder_name
self.feature_extractor = feature_extractor
self.batch_size = batch_size
self.worker = worker
self.collator = collator
self.data_limit = data_limit
print(f'Data Limit is set to : {self.data_limit}')
def setup(self, stage=None):
self.train_dataset = CocoTextDataset(self.path, self.ann_file_name, self.image_folder_name, self.feature_extractor, is_train=True, data_limit=self.data_limit)
print(f'# of training samples: {self.train_dataset.dataset_length}')
self.val_dataset = CocoTextDataset(self.path, self.ann_file_name, self.image_folder_name, self.feature_extractor, is_train=False, data_limit=self.data_limit)
print(f'# of validation samples: {self.val_dataset.dataset_length}')
def visualize_example(self, index):
print(f'Visualize Example: {index}')
file_name = self.train_dataset.coco_text.loadImgs(self.train_dataset.image_ids[index])[0]['file_name']
path = os.path.join(self.train_dataset.image_folder_path, file_name)
annotations = self.train_dataset.coco_text.loadAnns(self.train_dataset.ann_ids[index])
print(f'{len(annotations)} boxes in image detected')
image = Image.open(path).convert("RGB")
draw = ImageDraw.Draw(image, "RGBA")
for annotation in annotations:
box = annotation['bbox']
x,y,w,h = tuple(box)
draw.rectangle((x,y,x+w,y+h), outline='red', width=1)
image.show()
def get_val_coco_text_dataset(self):
return self.val_dataset.coco_text
def train_dataloader(self):
return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=self.worker, pin_memory=True, collate_fn=self.collator)
def val_dataloader(self):
return DataLoader(self.val_dataset, batch_size=self.batch_size, shuffle=False, num_workers=self.worker, pin_memory=True, collate_fn=self.collator)Model:class TextDetectionModel(pl.LightningModule):
def __init__(self, lr, id2label, feature_extractor, coco_evaluator, sync):
super().__init__()
self.save_hyperparameters()
self.sync_dist = sync
self.lr = lr
self.id2label = id2label
self.feature_extractor = feature_extractor
self.coco_evaluator = coco_evaluator
self.num_classes = len(id2label)
self.model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", num_queries=2000, encoder_layerdrop=0.2, decoder_layerdrop=0.2,
num_labels=self.num_classes, ignore_mismatched_sizes=True, return_dict=True)
def forward(self, pixel_values, pixel_mask=None, labels=None):
outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels, return_dict=True)
return outputs.loss, outputs.loss_dict, outputs.logits, outputs.pred_boxes
def training_step(self, batch, batch_idx):
pixel_values = batch["pixel_values"]
pixel_mask = batch["pixel_mask"]
labels = [{k: v.to(self.device) for k, v in t.items()} for t in batch["labels"]]
outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels)
loss = outputs[0]
loss_dict = outputs[1]
self.log("train_loss", loss.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
for k,v in loss_dict.items():
self.log("train_" + k, v.item())
return loss
def validation_step(self, batch, batch_idx):
pixel_values = batch["pixel_values"]
pixel_mask = batch["pixel_mask"]
labels = [{k: v.to(self.device) for k, v in t.items()} for t in batch["labels"]]
bboxes = [entry['boxes'] for entry in labels]
outputs = self.model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels)
loss = outputs[0]
loss_dict = outputs[1]
logits = outputs[2]
# pred_boxes = outputs[3]
# compute averaged probability of each bbox
proba = torch.stack([x for x in logits.softmax(-1)[0, :, :-1]]).mean()
# compute COCO Output for each image
# orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
# results = self.feature_extractor.post_process(outputs, orig_target_sizes) # convert outputs of model to COCO api
# res = {target['image_id'].item(): output for target, output in zip(labels, results)}
# Coco Eval is broken currently
# self.coco_evaluator.update(res)
self.log("val_loss", loss.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
self.log("val_bbox_proba", proba.detach(), prog_bar=True, on_step=False, on_epoch=True, sync_dist=self.sync_dist)
for k,v in loss_dict.items():
self.log("val_" + k, v.item())
return loss
#def validation_epoch_end(self, outputs):
# self.coco_evaluator.synchronize_between_processes()
# self.coco_evaluator.accumulate()
# self.coco_evaluator.summarize()
def predict_step(self, batch, batch_idx):
pixel_values = batch["pixel_values"]
outputs = self.model(pixel_values=pixel_values)
logits = outputs[2]
pred_boxes = outputs[3]
probas = logits.softmax(-1)[0, :, :-1]
return {'probas': probas, 'pred_boxes': pred_boxes}
def configure_optimizers(self):
param_dicts = [
{"params": [p for n, p in self.named_parameters() if "backbone" not in n and p.requires_grad]},
{
"params": [p for n, p in self.named_parameters() if "backbone" in n and p.requires_grad],
"lr": 1e-5, # this lr is used for backbone parameters
},
]
optimizer = AdamW(param_dicts, lr=self.lr, weight_decay=1e-4)
scheduler = ReduceLROnPlateau(optimizer, patience=2, verbose=True)
return {'optimizer': optimizer, 'lr_scheduler': scheduler, 'monitor': 'val_loss'}
def optimizer_zero_grad(self, epoch, batch_idx, optimizer, optimizer_idx):
        optimizer.zero_grad(set_to_none=True)
Trainer:
import argparse
import os
import warnings
import time
import numpy as np
import onnx
import pytorch_lightning as pl
import torch
from onnxruntime.quantization import quantize_qat
from pytorch_lightning.callbacks import (EarlyStopping, LearningRateMonitor, ModelCheckpoint)
from pytorch_lightning.loggers import TensorBoardLogger
from transformers import DetrFeatureExtractor
from coco_tools.coco_torch_evaluator import CocoEvaluator
from dataloader import COCODatasetLoader, collate_fn
from model import TextDetectionModel
def __check_for_boolean_value(val):
"""argparse helper function
"""
if val.lower() == "true":
return True
else:
return False
if __name__ == '__main__':
warnings.filterwarnings("ignore")
pl.seed_everything(42, workers=True)
print('annotations file and image folder have to be in the same parent folder')
parser = argparse.ArgumentParser(description='Text Detection Trainer')
parser.add_argument("--path", help='path to generated images', type=str, required=False, default='/COCOText-v2') #set to true
parser.add_argument("--ann_file_name", help='name of annotations file', type=str, required=False, default='cocotext.v2.json')
parser.add_argument("--image_folder_name", help='name of image folder', type=str, required=False, default='train2014')
parser.add_argument("--epochs", help='how many epochs to train the model',type=int, required=False, default=250)
parser.add_argument("--batch_size", help='how big are a batch',type=int, required=False, default=8)
parser.add_argument("--data_limit", help='set a fixed data limit',type=int, required=False, default=0)
parser.add_argument("--worker", help='how many threads for the Dataloader',type=int, required=False, default=0)
parser.add_argument("--learning_rate", help='the learning rate for the optimizer',type=float, required=False, default=1e-4)
parser.add_argument("--gradient_clip", help='float for gradient clipping',type=float, required=False, default=0.1)
parser.add_argument("--visualize_random_example", help='if true show an example from train set',type=__check_for_boolean_value, required=False, default=False)
args = parser.parse_args()
path = args.path
ann_file_name = args.ann_file_name
image_folder_name = args.image_folder_name
epochs = args.epochs
batch_size = args.batch_size
data_limit = args.data_limit
worker = args.worker
learning_rate = args.learning_rate
gradient_clip = args.gradient_clip
visualize_random_example = args.visualize_random_example
if data_limit == 0:
data_limit = None
# resource handling
if torch.cuda.device_count() >= 1:
batch_size = int(batch_size / torch.cuda.device_count())
accelerator = 'ddp'
sync = True
else:
accelerator = None
sync = False
### Data Part
os.makedirs('text_detection_model_files', exist_ok=True)
feature_extractor = DetrFeatureExtractor(format="coco_detection", do_resize=False, do_normalize=True, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225])
feat_extractor_to_save = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50", do_resize=True, size=600)
feat_extractor_to_save.save_pretrained('text_detection_model_files/transformer_model/')
print('feature extractor saved succesful')
data_module = COCODatasetLoader(path=path,
ann_file_name=ann_file_name,
image_folder_name=image_folder_name,
feature_extractor=feature_extractor,
batch_size=batch_size,
worker=worker,
collator=collate_fn,
data_limit=data_limit)
data_module.setup()
if visualize_random_example:
index = np.random.choice(len(data_module.train_dataset))
data_module.visualize_example(index)
train = data_module.train_dataloader()
val = data_module.val_dataloader()
coco_val_dataset = data_module.get_val_coco_text_dataset()
coco_evaluator = CocoEvaluator(coco_val_dataset, ['bbox'])
print('Coco Evaluator created')
### Model Part
id2label = {0: 'Text'} # we have only one class to detect: Text
text_detection_model = TextDetectionModel(lr=learning_rate, id2label=id2label, feature_extractor=feature_extractor, coco_evaluator=coco_evaluator, sync=sync)
### Callback Part
checkpoint_callback = ModelCheckpoint(
dirpath="text_detection_model_files/checkpoints",
filename="best-checkpoint",
save_top_k=1,
verbose=True,
monitor="val_loss",
mode="min"
)
logger = TensorBoardLogger(save_dir="text_detection_model_files/Lightning_logs", name="Text_Detection")
early_stopping_callback = EarlyStopping(
monitor="val_loss",
min_delta=0.001,
patience=15,
check_finite=True,
verbose=True
)
lr_monitor = LearningRateMonitor(logging_interval='epoch')
### Training Part
trainer = pl.Trainer(logger=logger,
weights_summary="full",
# only if gpu mem is overheaded -> needs much more train time
benchmark=True,
move_metrics_to_cpu=False,
val_check_interval=0.5,
gradient_clip_val=gradient_clip, # set to 0.5 to avoid exploding gradients
stochastic_weight_avg=True,
callbacks=[
checkpoint_callback,
early_stopping_callback,
lr_monitor
],
max_epochs=epochs,
gpus=torch.cuda.device_count(),
accelerator=accelerator,
precision=32, # dont change for model
accumulate_grad_batches=1, # optimizer step after every n batches -> better gpu mem usage / model specific
progress_bar_refresh_rate=20,
# profiler='pytorch', # only for debug
)
trainer.fit(text_detection_model, train, val)
time.sleep(2) # short delay
trained_model = text_detection_model.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
trained_model.eval()
trained_model.freeze()
### Saving Part
# ----------------------------------
# PyTorch Model - full
# ----------------------------------
try:
torch.save(trained_model, "text_detection_model_files/torch_text_detection_model.pt")
print('Torch model saved successful')
except Exception as e:
print('Cannot export as PyTorch Format -- Error : ' + str(e))
# ----------------------------------
# PyTorch Model - state dict
# ----------------------------------
try:
torch.save(trained_model.state_dict(), "text_detection_model_files/torch_text_detection_model_state_dict.pt")
print('Torch model state dict saved successful')
except Exception as e:
print('Cannot export as PyTorch Format with state dict -- Error : ' + str(e))
# ----------------------------------
# onnx
# ----------------------------------
try:
input_batch = next(iter(val))
input_sample = {
"pixel_values": input_batch["pixel_values"][0].unsqueeze(0),
}
values = input_sample['pixel_values']
file_path = "text_detection_model_files/torch_text_detection_model.onnx"
torch.onnx.export(trained_model, values, file_path,
input_names=['pixel_values'],
output_names=['logits', 'pred_boxes'],
dynamic_axes={'pixel_values': {0: 'batch_size', 1: 'channels', 2: 'width', 3: 'height'},
'logits': {0: 'batch_size'}, 'pred_boxes': {0: 'batch_size'}},
export_params=True, opset_version=11,
enable_onnx_checker=True, verbose=False)
print('Onnx model saved successful')
print('Start model quantization')
model_quant = "text_detection_model_files/torch_text_detection_model.quant.onnx"
quantized_model = quantize_qat(file_path, model_quant)
print('Quantization succesfull')
except Exception as e:
print('Cannot export as ONNX Format -- Error : ' + str(e))
# Predictions
model = text_detection_model.load_from_checkpoint(checkpoint_path=trainer.checkpoint_callback.best_model_path)
preds = trainer.predict(model, val, return_predictions=True)
    print(preds)
@nielsr do you have any idea or recommendations? ^^ | 2021-09-29T14:51:09Z | []
Summarization for downstream task | https://discuss.huggingface.co/t/summarization-for-downstream-task/10011 | 0 | 646 | Hi!
I was wondering if anyone could point me to any work about summarization for a downstream task.
For example, given an NLP pipeline, one might want to first summarize the input and then perform some tasks (e.g. keyword extraction, classification, etc.). For very long inputs, a first summarization step makes the text more manageable. I know of groups / companies that do proceed in this way in some cases.
However, one might want to directly summarize the text with the downstream task in mind: for keyword extraction, this might mean keeping as many keywords as possible; for classification, keeping interesting features, etc.
Is anyone aware of any research work in this direction? I have looked a bit and did not find anything, but I would be surprised if no previous work exists, so I am probably searching using the wrong keywords. Any idea in this direction would also be highly appreciated. | 2021-09-15T08:56:59Z | []
[Call for participation] Interactive Grounded Language Understanding in a Collaborative Environment (IGLU) Competition@NeurIPS2021 | https://discuss.huggingface.co/t/call-for-participation-interactive-grounded-language-understanding-in-a-collaborative-environment-iglu-competition-neurips2021/9851 | 0 | 716 | Human intelligence has the remarkable ability to quickly adapt to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose theNeurIPS IGLU competition: Interactive Grounded Language Understanding in a Collaborative Environment.The primary goal of the IGLU competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants.This research challenge is naturally related, but not limited, to two fields of study: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring two communities together to approach one of the important challenges in AI. Another important aspect of the challenge is the dedication to perform a human-in-the-loop evaluation as a final evaluation for the agents developed by contestants.The goal of our competition is to approach the following scientific challenge: How to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment? By the interactive agent we mean that the agent is able to follow the instructions correctly, is able to ask for clarification when needed, and is able to quickly adapt newly acquired skills, just like humans are able to do while collaboratively interacting with each other.Tasks and Application Scenarios:Given the current state of the field, our main research challenge might be too complex to suggest a reasonable end-to-end solution. Therefore, we split the problem into the following concrete research tasks:Architect Task:Given target structure, generate step instructions for the BuilderSubmission system:CodaLab - CompetitionBuilder Task:Given Architect-Builder conversation, build target structure:Submission system:https://competitions.codalab.org/competitions/33828Prizes:Architect Task:1st place - 5K $2nd place - 1.5k $3rd place - 500 $Builder Task:1st place - 5K $2nd place - 1.5k $3rd place - 500 $Timeline:July 26 – Stage 1 begins;(Tentative) October 15 – Stage 1 ends;October 22 – Stage 2 begins by deploying the top-3 performing agents for human evaluation;November 26 – The results of Stage 2 are posted, and the list of winning teams per task is released;December 6 – NeurIPS 2021 begins.Upcoming workshops:To make it even easier for you to onboard, we will arrange workshops per task:Architect Task on Sep 9, at 9 am PST: the link to sign up:IGLU - Workshop (iglu-contest.net)Builder Task on Sep 10, at 10 am PST: the link to sign up:IGLU - Workshop (iglu-contest.net)During the workshops, our team will walk you through setup, available baselines, and training environment (for Builder task). You will have a great opportunity to ask any questions, which we probably can answerGuest Lectures:If you have missed our guest lectures. 
Here are the links to the recordings: IGLU - Guest Lecture by Marc (iglu-contest.net), IGLU - Guest Lecture by Jianwei (iglu-contest.net)
For more frequent updates: follow us on Twitter @IgluContest and the news section of our website: IGLU (iglu-contest.net)
For questions to organizers and mentors use the Slack channel: Join IGLU on Slack | Slack
Register for the competition at CodaLab: CodaLab - Competition (https://competitions.codalab.org/competitions/33828) | 2021-09-09T14:41:39Z | []
Implementing a custom Attention Transformer | https://discuss.huggingface.co/t/implementing-a-custom-attention-transformer/9702 | 5 | 2,983 | Hello everyone, I am currently trying to implement a custom attention transformer, whose attention is given on page 4 of this link. The authors used Hugging Face for the implementation, and I am not sure how to approach this problem or how to use Hugging Face to implement custom attention. Can anybody guide me on how to go about implementing this? Thanks! | 2021-09-03T04:54:27Z | [
{
"date": "2021-09-03T20:24:28Z",
"reply": "Hey@iakarshumy best guess is that the authors implemented DocFormer from scratch, so as far as I can tell you can’t do some clever subclassing of an existing model to tweak the attention layers.Having said that, you could look at the implementation ofLayoutLMV2which seems to share a similar approach and you can usethis templateto get all the basic modeling files.Do you know if AWS open-sourced the pretrained weights of DocFormer? Without them, you might need a lot of compute to build a useful model.Hope that helps!"
},
{
"date": "2021-09-04T03:02:44Z",
"reply": "Hey@lewtun, thanks a lot for sharing this, maybe then I would focus on implementing it from scratch, and learn from the implementation of LayoutLMV2, thanks a lot for that. And for the computation, I have some resources, which means NVIDIA DGX to work, and I am searching about the open-source Docformer code, but I am not getting it. I mailed the author and they refrained from sharing the code, so I don’t think that they have open-sourced it. Again, thanks a lot for replying."
},
{
"date": "2021-09-06T08:35:30Z",
"reply": "Hey@nielsrisDocFormercurrently on your roadmap fortransformers?@iakarshuis thinking about having a go at implementing and pretraining it (because the authors didn’t release code or weights), so I thought it would be good to double-check that you don’t do the same work twice"
},
{
"date": "2021-09-06T09:48:59Z",
"reply": "No it’s not on my list, seems interesting.However, if there are no pre-trained weights available (and even no code), then there’s a low chance for me to add it to the library."
},
{
"date": "2021-09-06T10:14:45Z",
"reply": "@nielsr@lewtunthanks a lot, then I would do it, and would ask the community if i get stucked, thanks a lot, I shall begin my coding then"
}
] |
Collaborative Training Experiment Round 2 with Yandex and HuggingFace | https://discuss.huggingface.co/t/collaborative-training-experiment-round-2-with-yandex-and-huggingface/9674 | 0 | 557 | Let’s train an even larger model together with Yandex, HuggingFace and Neuropark!
A few months ago we assembled to train a large SahajBERT. So let’s make it even larger! Join Neuropark’s discord community with this link - Neuropark. We are about to start the training from 2nd September.
There will be a few new things to play with beside the 4x scale:
sahajBERT 2.0 will start from sahajBERT 1.0 using Net2Net model expansion
we’ll try hybrid training with both GPU and TPU and see how they compare
and bring along local GPU devices (see below)
If you have a GPU desktop with ≥6GB memory and ≥30Mbit upload speed, we’d really appreciate it if you can bring it to the collaborative run (and we will help you with the setup). You can join and leave training at any time, even if it is only for a couple of hours.
Also, we’d really appreciate your ideas on the training procedure:
fine-tuning benchmarks that we should run: anything beside Wikiann and Soham News Category Classification?
future training runs: we’ll be able to train the model in ~2 weeks. Is there any other task that you would like to pretrain a model for? What data should we use there?
Let me know if you face any issues regarding joining or anything.
Check our previous models on neuropark (Neuropark). Read the blog post about our previous collaborative training - Deep Learning over the Internet: Training Language Models Collaboratively. Paper link - [2106.10207] Distributed Deep Learning in Open Collaborations.
Thanks to Yandex and Hugging Face for this initiative. Let’s train 4x! | 2021-09-01T16:17:44Z | []
Tutorial / codebase for models interacting while training? | https://discuss.huggingface.co/t/tutorial-codebase-for-models-interacting-while-training/9554 | 0 | 490 | I need guidance on how to get started on a research project. I want to train two models (the particular architectures aren’t important) in tandem, with the ability to have the two models pass input tokens and output tokens between one another during training. Is there a tutorial or codebase with this functionality for me to get started? | 2021-08-29T00:44:06Z | []
10_000 samples & 10_000 labels | https://discuss.huggingface.co/t/10-000-samples-10-000-labels/8868 | 0 | 500 | Hey Community, I have a dataset in which each sample has its own label. For instance: I have 10,000 samples, each with one word as its label, and each label is unique to that sentence; this makes 10,000 training samples with 10,000 labels. Does anyone here have an idea about how to do this, or a toy code example?
Thank you so much for your help | 2021-07-31T09:57:12Z | []
Best way to infer continuously with Transformer? | https://discuss.huggingface.co/t/best-way-to-infer-continuously-with-transformer/8690 | 0 | 552 | Hi!I’m looking for ways to infer w/ a Transformer model in a continuous manner — basically, I want it to retain some information about the previous sample in case it was part of the same text segment.One approach I’m trying out now is inferring with intersecting windows (stride < length), and aggregating encoder embeddings of the overlapping part of the sequence (i.e. use information from window N to infer N+1). I use summing to aggregate instead of mean/dot product, as it gives the closest result to inferring as usual, but the result still doesn’t account for earlier context, meaning the approach doesn’t work.Has this problem been addressed already? Is the typical solution to just increase input length bound? (What if I don’t have enough compute to train a model with large input lengths?) | 2021-07-26T12:11:00Z | [] |
The (hidden) meaning behind the embedding of the padding token? | https://discuss.huggingface.co/t/the-hidden-meaning-behind-the-embedding-of-the-padding-token/3212 | 2 | 5,768 | So I noticed that transformers produce different embeddings for PAD tokens, and I know pad tokens are typically simply ignored for the most part (if present at all). However, as a forward pass over a batch typically contains dozens of padding tokens, it would be interesting to see if these in fact hold any meaningful information (as padding tokens do attend to the sequence). Does anyone know of any research which has been conducted on what information might be present here?
One might legitimately ask why this is relevant: aren’t padding tokens simply a convenience for efficient processing because we need the same tensor shape? This is naturally correct, but quite a few studies have clustered the sentence embeddings, and it seems relevant to ask what influence the padding embeddings have on this.
For a short demonstration that they indeed have different embeddings:
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained(
"bert-base-uncased")
model = transformers.BertModel.from_pretrained(
"bert-base-uncased")
input_ = tokenizer(["this is a sample sentence"], return_tensors="pt",
# add some padding
padding="max_length", max_length=128, truncation=True)
output = model(**input_)
# extract padding token embedding
pad_tok_id = [i for i, t in enumerate(input_["input_ids"][0]) if t == 0]
embedding_pad1 = output[0][0][pad_tok_id[0]]
embedding_pad2 = output[0][0][pad_tok_id[1]]
embedding_pad1.shape #embedding size
embedding_pad1[0:10]
embedding_pad2[0:10]
tensor([-0.5072, -0.4916, -0.1021, -0.1485, -0.4096, 0.0536, -0.1111, 0.0525,
-0.0748, -0.4794], grad_fn=<SliceBackward>)
tensor([-0.6447, -0.5780, -0.1062, -0.1869, -0.3671, 0.0763, -0.0486, 0.0202,
-0.1334, -0.5716], grad_fn=<SliceBackward>) | 2021-01-15T09:22:01Z | [
{
"date": "2021-04-29T01:21:21Z",
"reply": "@KennethEnevoldsenI have been thinking about the same a while ago.You have a point with different embeddings for pad tokens. But, to my understanding these never interfere with any part of model’s computation (like, self attention), since the pad tokens are always masked using the attention masks.Would you have an example of where the pad token embeddings could make a difference, given the attention mask?"
},
{
"date": "2021-07-14T11:13:48Z",
"reply": "Hello,This discussion sounds interesting to me because I was thinking the same.Why there are different embedding vectors for PAD tokens.My use-case is a multi-label text classification where I am using a pretrained model in MaskedLanguageModeling as an “embedding layer”. More specific, I feed the input text [b,t] padded to the “embedding layer” and it outputs [b,t,f], where b is the batch_size, t is the length of the max sequence in the batch, f is the feature_number.After this I am using Attention to [b,t,f] and take a vector [b,1,f] which, after pass it from two linear layers and a sigmoid, gives the predictions.I check cosine similarity between embedding vectors of PAD tokens and it is almost between all over 0.7. Additionally, cosine similarity between words’ embedding vectors and PAD tokens’ vectors is almost between all under 0.3.Atenttion mechanism seems to assign negligible weights to PAD tokens embeddings vectors.In general it seems that these vectors are kind of ignored from the model. Furthermore, my results are pretty ok with respect to accuracy."
}
] |
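Following up on the question in the replies above, a quick empirical check one can run: with the attention mask in place, the embeddings of the real (non-pad) tokens should be numerically almost unaffected by how much padding is appended. The model name and tolerance below are arbitrary choices for illustration.
import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model = transformers.AutoModel.from_pretrained("bert-base-uncased")
model.eval()

short = tokenizer(["this is a sample sentence"], return_tensors="pt")
padded = tokenizer(["this is a sample sentence"], return_tensors="pt",
                   padding="max_length", max_length=128)

with torch.no_grad():
    out_short = model(**short).last_hidden_state
    out_padded = model(**padded).last_hidden_state

n = short["input_ids"].size(1)  # number of real (non-pad) tokens
# pad positions get ~zero attention weight, so real-token embeddings barely move
print(torch.allclose(out_short[0, :n], out_padded[0, :n], atol=1e-4))
print((out_short[0, :n] - out_padded[0, :n]).abs().max())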
Language model to search an answer in a huge collection of (unrelated) paragraphs | https://discuss.huggingface.co/t/language-model-to-search-an-answer-in-a-huge-collection-of-unrelated-paragraphs/2210 | 4 | 1,475 | I want to build a question/answer language model to search a large collection of paragraphs, say 10k paragraphs, and find relevant answers in them. There are 2 issues I don’t know how to solve:
Existing solutions often identify an answer from a single short paragraph. I don’t know how to deal with a lot of paragraphs. A naive approach would be going through each paragraph and identifying an answer in each of them.
Existing solutions will generate an answer even when fed an unrelated paragraph, and they don’t give a confidence number. If I have 10k paragraphs to search an answer from, and only 3 paragraphs have an answer, using existing solutions won’t let me rule out unrelated paragraphs.
Is there a way to generate a document embedding first (using both a question and a paragraph), so I can use the embedding to find candidate paragraphs first and then do the actual answer search? And when there is no answer, I’d like to get a confidence number that’s below my answer threshold.
Are there any papers dealing with this problem? | 2020-11-25T18:59:48Z | [
{
"date": "2020-11-27T23:50:08Z",
"reply": "DPR & RAG may be the references you want.Regarding your questions and my answers with DPRhuggingface.coDPR — transformers 3.5.0 documentationDPR (retriever module) select top-k paragraphs from 20 million of possible wikipedia paragraphs (not just 10k, and you can also make your own corpus) using very fast MIPS (maximum inner product search) implemented by FAISSDPR (reader module) produce a relevance score for each of the top-k passages so this is a confidence number that you mentionedFinally, RAG is an improvement of DPR where (1) you can combine different passages directly (both relevance and irrelevance) to produce the final answer by “marginalization” and (2) Final answer is generated in free-form, not necessarily contained in any of the passage .(Please see the paper for detailshttps://huggingface.co/transformers/model_doc/rag.html)"
},
{
"date": "2021-07-02T06:32:54Z",
"reply": "Hi Jung & HF Community.I am implementing a RAG process,… with a daily update.I can easily merge the dataset objects using datasets.concatenate_datasets()but I have two questions:I cannot merge the indices… even if i .load_faiss_index() to each part the concat object has no indexIs this the best way to search a large corpus or would it be best to load each dataset into a seperate node and scan across a cluster?I am followingtransformers/use_own_knowledge_dataset.py at master · huggingface/transformers · GitHub, creating a new folder for each daily dataset."
},
{
"date": "2021-07-03T09:24:36Z",
"reply": "Hi@Berowne, it’s very interesting question.Daily updated datasets should be an important use case.Unfortunately, I have no answer. Maybe@lhoestqcould help us here?"
},
{
"date": "2021-07-06T12:40:00Z",
"reply": "Hi ! If you concatenate two datasets, you will need to build a new FAISS index for the new dataset.Depending on the number of documents you have and the type of index you use, you can either:rebuild a new index from scratch (easy, but slow for big datasets and advanced index types)or update one of the existing index with new vectors (useful if you need to add a few new documents for example into an already existing big dataset)or merge the two index together (possible only for certain index types,hereis an example for IVF)Regarding your second question, it is definitely a reasonable way to search a large corpus. Though it may also depend on your needs in terms of speed and accuracy, and on the size of your dataset."
}
] |
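A small sketch of the rebuild-from-scratch route described in the last reply: concatenate the daily datasets first, then build one FAISS index over the combined embeddings. The paths, the embeddings column name, and the random query vector are placeholders; in a real RAG setup the query would come from the DPR question encoder.
import numpy as np
from datasets import concatenate_datasets, load_from_disk

day1 = load_from_disk("corpus_2021_07_01")   # each daily dataset already has an "embeddings" column
day2 = load_from_disk("corpus_2021_07_02")

corpus = concatenate_datasets([day1, day2])
corpus.add_faiss_index(column="embeddings")  # a fresh index over the merged data

query = np.random.rand(768).astype("float32")  # stand-in for a DPR question embedding
scores, passages = corpus.get_nearest_examples("embeddings", query, k=5)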
Address extraction and formated using Places API (Google Maps API) | https://discuss.huggingface.co/t/address-extraction-and-formated-using-places-api-google-maps-api/7998 | 0 | 1,668 | I am currently playing around with the Places API from Google.
I am just curious about the technique they use to make this happen (is NER alone enough for this?). I think they first detect where the address is in my input, then parse it into sub-levels (like what I describe below).
When I input the text: "I wanna deliver this package to #K A B C", it gave me a tremendously good result with 3 administrative_area_levels; it even formatted my input text (and corrected spelling/grammar mistakes) into a proper address. In detail, it looks something like
street_name: {#k}
administrative_area_level_1: {A}
administrative_area_level_2:{B}
...
formated_address: #K, A, B, CGoogle DevelopersOverview | Places API | Google DevelopersProvide type-ahead predictions for text-based geographic searches, by returning places such as businesses, addresses and points of interest as a user types. | 2021-07-04T14:25:56Z | [] |
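Google does not document how the Places API parses addresses internally, so the following is only a rough sketch of the "detect the address span first" idea using a generic Hugging Face NER pipeline. The dslim/bert-base-NER checkpoint and the LOC filter are illustrative assumptions; splitting the span into administrative levels would need a separate rule- or gazetteer-based step.

```python
# A rough sketch of "detect the address span first, then parse it".
# This is NOT how the Places API works internally (that is not public);
# it only shows a generic NER first step with a Hugging Face pipeline.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "I wanna deliver this package to 10 Downing Street, London"
entities = ner(text)

# Keep location-like spans as address candidates; a second, rule-based or
# gazetteer-based step would then split them into administrative levels.
address_parts = [e["word"] for e in entities if e["entity_group"] == "LOC"]
print(address_parts)
```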
Finetuning for fp16 compatibility | https://discuss.huggingface.co/t/finetuning-for-fp16-compatibility/977 | 2 | 1,611 | T5 and Pegasus don’t really work in fp16 because they produce activations that overflow the fp16 range (they were trained in bfloat16, which has a larger range). Has anyone read, seen, or heard anything about fine-tuning or scaling models so that their activations fit in fp16 (or, more generally, about encouraging smaller-magnitude activations)? I tried one experiment on google/pegasus-xsum where I fine-tune with the summarization LM loss and add some additional losses based on the magnitude of the hidden states, but I haven’t tuned their weights yet (the model instantly forgets how to summarize), so I’m looking around. | 2020-09-03T17:26:08Z | [
{
"date": "2021-06-07T10:05:31Z",
"reply": "It’s been a long time since this post, but maybe you remember if the problem with fp16 will appear when training the models from scratch (pretraining)?I’ve seen some NaNs already while training with fp16 on, but after lowering the learning rate, beginning of training looks reasonable."
},
{
"date": "2021-06-17T10:33:53Z",
"reply": "After 3 days of training with fp16 on NaN loss happened. Created issuePegasus pretraining in fp16 results in NaN loss · Issue #12225 · huggingface/transformers · GitHub, maybe someone knows how it can be fixed."
}
] |
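As a concrete illustration of the kind of experiment described in this thread, here is a sketch that adds a penalty on hidden-state magnitudes to the summarization loss. The margin, the penalty form, and the weight are assumptions for illustration, not the original poster's exact setup.

```python
# Sketch: penalize large hidden-state magnitudes during fine-tuning so the
# activations stay inside the fp16 range. Margin and weight are placeholders.
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

FP16_MAX = 65504.0       # largest finite float16 value
PENALTY_WEIGHT = 1e-6    # placeholder; too large and the model forgets the task

batch = tokenizer(["A long article to summarize."], return_tensors="pt")
labels = tokenizer(["A summary."], return_tensors="pt").input_ids

outputs = model(**batch, labels=labels, output_hidden_states=True)

# Penalize only the part of each hidden state that exceeds a soft margin
# (here 10% of the fp16 maximum), so ordinary activations are untouched.
margin = 0.1 * FP16_MAX
penalty = sum(
    torch.relu(h.abs() - margin).pow(2).mean()
    for h in outputs.encoder_hidden_states + outputs.decoder_hidden_states
)
loss = outputs.loss + PENALTY_WEIGHT * penalty
loss.backward()
```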
What can transformers learn without position encoding? | https://discuss.huggingface.co/t/what-can-transformers-learn-without-position-encoding/6554 | 1 | 2,898 | It obviously makes sense that attention mechanisms have no inherent sense of position unless it is encoded explicitly, and for sequence prediction this seems critical. But, for example, word2vec via CBOW or skip-gram is able to learn word embeddings without explicit position encoding. So my question is basically: if we train a BERT model without position encoding on the masked LM task (which seems very similar to word2vec to me), what is BERT capable of learning, if anything? Would it be better than word2vec for creating word embeddings? | 2021-06-03T15:52:11Z | [
{
"date": "2021-06-10T08:18:11Z",
"reply": "My intuition would be that the transformers would still have a notion of context. It would still know this word appear in context with those other words, but would lose the notion of order loosely associated with position embeddings. Also, it would still allow word embeddings to change depending on the other words in context. So it would still be better than word2vec, which only has one embedding by word (learned as a combination of several contexts)."
}
] |
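One way to actually run the ablation discussed here is sketched below: zero out and freeze BERT's position embeddings before MLM training, so the model only sees an unordered bag of context words. This is an assumed recipe for the experiment, not an established one.

```python
# Sketch: remove positional information from BERT before MLM pre-training by
# zeroing and freezing the position embedding table.
import torch
from transformers import BertConfig, BertForMaskedLM

config = BertConfig()                 # default bert-base sized config
model = BertForMaskedLM(config)       # train from scratch

pos_emb = model.bert.embeddings.position_embeddings
with torch.no_grad():
    pos_emb.weight.zero_()            # remove positional information
pos_emb.weight.requires_grad = False  # keep it removed during training

# From here, pre-train with the usual MLM objective (e.g. the run_mlm.py
# example script or a custom Trainer loop).
```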
Project Description | https://discuss.huggingface.co/t/project-description/6444 | 1 | 362 | Hi @Mads, your project looks very interesting; would you mind adding a description? (Mads/wav2vec2-xlsr-large-53-kor-financial-engineering · Hugging Face) | 2021-05-28T09:33:44Z | [
{
"date": "2021-05-29T04:25:31Z",
"reply": "Hi Snow, thank you for your interest!I will update in the coming week as soon as possible!"
}
] |
Does it make sense to generate sentences with Transformer's encoder? | https://discuss.huggingface.co/t/does-it-make-sense-to-generate-sentences-with-transofmrers-encoder/6311 | 0 | 376 | Quite a few vision+language papers pretrain a BERT-based model on image-text data and fine-tune it for the image captioning task, yet no decoder is involved in generating the sentences. Does that make sense? And what is the main difference between doing sentence generation with a Transformer's encoder versus doing it with a Transformer's decoder? | 2021-05-22T15:26:15Z | []
PEGASUS model overfitting | https://discuss.huggingface.co/t/pegasus-model-overfitting/6246 | 2 | 460 | Hey everyone, I would like to see any scientific evidence regarding model overfitting that is available for the PEGASUS (Pre-training with Extracted Gap-sentences for Abstractive Summarization) model. If anyone can point me to some resources or provide an answer, I’d greatly appreciate it. Thanks and stay safe | 2021-05-19T07:05:14Z | [
{
"date": "2021-05-19T12:01:41Z",
"reply": "het@theprincedripi don’t know the answer off the top of my head, but one place to start would be to check out the citations of the pegasus paper, e.g. viaGoogle Scholar"
},
{
"date": "2021-05-19T12:12:11Z",
"reply": "Thanks a lot, I’ll check it out"
}
] |
Classification Heads in BERT and DistilBERT for Sequence Classification | https://discuss.huggingface.co/t/classification-heads-in-bert-and-distilbert-for-sequence-classification/6146 | 2 | 1,089 | Hi, I have been using BertForSequenceClassification and DistilBertForSequenceClassification recently and I have noticed that they have different classification heads. BertForSequenceClassification has a dropout layer and a linear layer, whereas DistilBertForSequenceClassification has two linear layers and a dropout layer. Is there a particular reason for this? Thanks in advance! | 2021-05-12T16:16:52Z | [
{
"date": "2021-05-12T18:22:47Z",
"reply": "All in all, they have the same head: BertForSequenceClassification has a dropout layer and a linear layer but uses the pooler output, which went through a linear layer inside the BertModel.DistilBertModel has no pooler output however, so the first linear layer is there to replicate that."
},
{
"date": "2021-05-13T09:28:34Z",
"reply": "Thank you that makes sense!"
}
] |
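The reply above is easier to see with the two heads written out side by side. The sketch below mirrors, approximately, what the transformers implementations do (the library source is the authoritative reference): BERT's "extra" linear layer is the pooler inside BertModel, while DistilBERT adds a pre-classifier linear layer plus ReLU to stand in for it.

```python
# Side-by-side sketch of the two classification heads, as plain modules.
import torch.nn as nn

hidden, num_labels, p = 768, 2, 0.1

# BERT: the "first linear layer" lives inside BertModel's pooler
# (dense + tanh over the [CLS] hidden state), so the task head itself
# is only dropout + classifier.
bert_pooler = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
bert_head = nn.Sequential(nn.Dropout(p), nn.Linear(hidden, num_labels))

# DistilBERT has no pooler, so its head replicates it with a
# pre-classifier linear layer + ReLU before dropout + classifier.
distilbert_head = nn.Sequential(
    nn.Linear(hidden, hidden),  # pre_classifier, stands in for the missing pooler
    nn.ReLU(),
    nn.Dropout(p),
    nn.Linear(hidden, num_labels),
)
```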
Collaborative Training Experiment of an Albert Model for Bengali | https://discuss.huggingface.co/t/collaborative-training-experiment-of-an-albert-model-for-bengali/5991 | 1 | 1,293 | Hugging Face is launching a collaborative training experiment for an ALBERT model for the Bengali language with our community, and we are actively looking for participants to help train the model. What do you need in order to participate? A Google Colab account; that’s everything you need. [Although if you want to use the power of your own GPUs, Hugging Face will also provide a script for that.] How can you contribute? If you are a native Bengali speaker, that would be a great help: we are looking for participants who will check the performance of the tokenizer, the sentence splitter, etc. You might also want to help us preprocess the dataset; we are using the Wikipedia dump and the OSCAR Bengali dataset to train the model, so if you have suggestions on preprocessing these, feel free to contribute to that part. Now the main part, distributed training: you will be provided with a Google Colab script to start the training, and if your kernel crashes, just restart the training script (non-native speakers can participate too). Join our Discord community here: https://discord.gg/GD9G4j8fJU [A separate Slack channel from Hugging Face will be provided where you will get to know more about the distributed training framework and other related things.] We are aiming to start this collaborative training experiment on May 7th. Please do participate in this first Hugging Face collaborative training experiment, especially the native Bengali speakers. | 2021-05-05T06:15:19Z | [
{
"date": "2021-05-06T09:43:18Z",
"reply": "Also I forgot to mention the main thing. Thanks to Yandex for creating this collaborative distributive training strategy. Without them this huge community training event would not be possible."
}
] |
Task-specific fine-tuning of GPT2 | https://discuss.huggingface.co/t/task-specific-fine-tuning-of-gpt2/5700 | 0 | 1,034 | Hi there, in the Seq2Seq examples (transformers/examples/legacy/seq2seq at master · huggingface/transformers · GitHub), why is there no mention of GPT-x? It seems to me that it shouldn’t be difficult to fine-tune this model using GPT2LMHeadModel for particular text-to-text tasks. Wondering if anyone has any thoughts on this. Thanks! | 2021-04-22T19:37:46Z | []
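One common way to do this, sketched below under the assumption that the task can be cast as "source, separator, target" in a single sequence, is to fine-tune GPT2LMHeadModel with the causal LM loss while masking the loss on the source tokens. The prompt format and the -100 masking convention are choices made here for illustration, not something the examples folder prescribes.

```python
# Sketch: text-to-text fine-tuning of GPT-2 by concatenating source and
# target and computing the LM loss only on the target tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

source = "summarize: The quick brown fox jumps over the lazy dog."
target = " A fox jumps over a dog."

src_ids = tokenizer(source, return_tensors="pt").input_ids
tgt_ids = tokenizer(target + tokenizer.eos_token, return_tensors="pt").input_ids

input_ids = torch.cat([src_ids, tgt_ids], dim=1)
labels = input_ids.clone()
labels[:, : src_ids.shape[1]] = -100   # compute the LM loss only on the target

loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
```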
Is causal language modeling (CLM) vs masked language modeling (MLM) a common distinction in NLP research? | https://discuss.huggingface.co/t/is-causal-language-modeling-clm-vs-masked-language-modeling-mlm-a-common-distinction-in-nlp-research/5665 | 0 | 2,130 | The huggingface documentation states: “GPT and GPT-2 are fine-tuned using a causal language modeling (CLM) loss while BERT and RoBERTa are fine-tuned using a masked language modeling (MLM) loss.” I have two questions regarding this statement: (1) Is this a common distinction you’d find in the NLP literature (any literature on this distinction)? (2) Is it a sensible distinction in your opinion? While I totally agree with the term CLM, I don’t understand why you would call BERT & co. “masked language models”, since it is causal language models that do the actual masking in next-token prediction. Thanks! | 2021-04-21T14:30:19Z | []
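A small code illustration of the terminology may help: in MLM the input itself is corrupted with [MASK] tokens and only those positions contribute to the loss, while in CLM the input is left intact and the "masking" is only the causal attention mask that hides future tokens. The snippet below is a toy comparison using standard checkpoints.

```python
# Toy comparison of the two objectives behind the CLM vs MLM terminology.
import torch
from transformers import (BertForMaskedLM, BertTokenizer,
                          GPT2LMHeadModel, GPT2Tokenizer)

# --- masked language modeling (BERT): the input is corrupted with [MASK] ---
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForMaskedLM.from_pretrained("bert-base-uncased")
enc = bert_tok("The capital of France is [MASK].", return_tensors="pt")
labels = enc.input_ids.clone()
mask_pos = enc.input_ids == bert_tok.mask_token_id
labels[~mask_pos] = -100                     # loss only on the masked position
mlm_loss = bert(**enc, labels=labels).loss

# --- causal language modeling (GPT-2): input intact, predict the next token ---
gpt_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
enc = gpt_tok("The capital of France is Paris.", return_tensors="pt")
clm_loss = gpt2(**enc, labels=enc.input_ids).loss   # labels are shifted internally
```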
Any ways to visualize attention of the LXMERT? | https://discuss.huggingface.co/t/any-ways-to-visualize-attention-of-the-lxmert/5579 | 0 | 493 | I would like to observe the attention between an input RoI and each word of the input sentence in LXMERT. If a framework exists that facilitates what I want to do, please let me know. If not, could you tell me which of the tensors from LXMERT I should watch? | 2021-04-17T17:06:53Z | []
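If no ready-made visualization tool turns up, one place to look is the attention tensors that transformers' LxmertModel returns with output_attentions=True, in particular the cross_encoder_attentions field (assuming current output field names). The sketch below uses random tensors in place of real Faster R-CNN RoI features, so the feature-extraction step is assumed to happen elsewhere.

```python
# Sketch: where RoI-to-word attention can be read off in transformers' LXMERT.
# The visual features below are random placeholders; in practice they come
# from a Faster R-CNN feature extractor (36 RoIs x 2048-dim is the usual shape).
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("A dog chasing a ball", return_tensors="pt")
visual_feats = torch.rand(1, 36, 2048)   # placeholder RoI features
visual_pos = torch.rand(1, 36, 4)        # placeholder normalized boxes

outputs = model(**inputs,
                visual_feats=visual_feats,
                visual_pos=visual_pos,
                output_attentions=True)

# One attention tensor per cross-modality layer; inspect the shapes to see
# which axis indexes RoIs and which indexes word tokens.
cross_attn = outputs.cross_encoder_attentions
print(len(cross_attn), cross_attn[0].shape)
```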
Human Evaluation and Statistical significance | https://discuss.huggingface.co/t/human-evaluation-and-statistical-significance/5374 | 0 | 408 | Hello, I have recently conducted a human evaluation of a chatbot via a survey, and I wonder how I can show that the results are statistically significant. More specifically, I compared the generated responses of two chatbots and calculated each one’s win rate. In addition, participants were asked to rate each model for “relevance” and “fluency” on a scale from 1 to 5. Some references (e.g. the DodecaDialogue paper) establish statistical significance using binomial testing. How can I apply a binomial test in this case? @patrickvonplaten | 2021-04-08T20:21:45Z | []
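A minimal way to apply the binomial (sign) test to the win-rate comparison is sketched below, assuming SciPy 1.7+ for scipy.stats.binomtest (older versions expose scipy.stats.binom_test instead); the counts are made-up illustration values. For the 1-5 relevance and fluency ratings, a rank-based test such as Mann-Whitney U is the more usual choice, but that goes beyond the binomial question asked here.

```python
# Binomial (sign) test: each pairwise comparison is a Bernoulli trial that is
# fair (p=0.5) under the null hypothesis that neither chatbot is preferred.
# The counts here are made-up illustration values.
from scipy.stats import binomtest

wins_model_a = 132          # judgments where model A's response was preferred
total_comparisons = 200     # ties excluded, one judgment per comparison

result = binomtest(wins_model_a, n=total_comparisons, p=0.5,
                   alternative="greater")
print(result.pvalue)        # small p-value -> the win rate is unlikely under chance
```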