Dataset schema:
sub: string (4 classes)
title: string (length 3-304)
selftext: string (length 3-30k)
upvote_ratio: float32 (0.07-1)
id: string (length 9)
created_utc: float32 (1.6B-1.65B)
LanguageTechnology
improving seq2seq model
I'm using an encoder-decoder seq2seq model for my chatbot and it turns out it's not performing very well (answering 15 out of 20 questions correctly). Are there any ways I can improve its performance or accuracy? What I can think of right now is the dataset: I have only about 400 questions, but how much data is really enough? I read somewhere that increasing the amount of data for examples with longer target sequence lengths may help. It may also be down to the number of epochs or the word embedding used; would using GloVe/word2vec be better than Keras' embedding layer? What else could be affecting the performance of the chatbot?
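A minimal sketch of swapping Keras' randomly initialized embedding layer for pretrained GloVe vectors; the file path, `word_index`, and `vocab_size` are assumptions standing in for your own tokenizer state:

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# Parse GloVe's plain-text format: one "word v1 v2 ... v100" line per word.
embeddings_index = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        embeddings_index[parts[0]] = np.asarray(parts[1:], dtype="float32")

# word_index: your tokenizer's word -> integer id mapping (hypothetical here).
embedding_matrix = np.zeros((vocab_size, 100))
for word, i in word_index.items():
    vec = embeddings_index.get(word)
    if vec is not None:
        embedding_matrix[i] = vec

# Freeze the pretrained vectors; set trainable=True to fine-tune them instead.
embedding_layer = Embedding(vocab_size, 100, weights=[embedding_matrix], trainable=False)
```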
0.82
t3_qhp49v
1,635,433,600
LanguageTechnology
Deploy TFBert Model with SageMaker for word embeddings inference?
So I have trained a TFBert model and made a script for getting the word embeddings from the trained model. I did the whole process on Google Colab, but now I am trying to move everything to AWS, so that I can train and deploy the model to an endpoint for further backend functions. Has anyone used SageMaker before? I did not want to adapt to their training steps (I was using Hugging Face for the original training), but I got stuck at wrapping a trained model and deploying it to an endpoint. I would be grateful for any tips on this. Thanks!
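One possible route, sketched with the SageMaker Hugging Face inference containers; the S3 path and IAM role are placeholders, and the version strings must match a container combination that actually exists (check the SageMaker docs for current ones):

```python
from sagemaker.huggingface import HuggingFaceModel

# model.tar.gz must contain the saved model + tokenizer files.
hf_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",   # placeholder path
    role="my-sagemaker-execution-role",         # placeholder role
    transformers_version="4.17",
    tensorflow_version="2.6",
    py_version="py38",
)

predictor = hf_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "example sentence to embed"}))
```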
1
t3_qhovkg
1,635,432,960
LanguageTechnology
Glove Number of Parameters to Train
Hi guys, it's pretty much in the title: I want to figure out the number of parameters required to train my GloVe model. I have a vocab size of 95k and an embedding dimension of 100. Any help would be really appreciated :)
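For the record, GloVe learns two vector sets (center-word and context-word vectors) plus a scalar bias for each, so the trainable parameter count works out as 2 × V × (d + 1):

```python
vocab_size = 95_000
dim = 100

# Two vectors (w and w~) of size d per word, plus two scalar biases per word.
params = 2 * vocab_size * (dim + 1)
print(f"{params:,}")  # 19,190,000
```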
1
t3_qhle7v
1,635,422,336
LanguageTechnology
How does dictionary-based sentiment analysis work?
How can we combine machine learning approaches with sentiment dictionaries to predict the severity level in a text? If anyone could simplify the general workflow, it would be really helpful. Thank you for your kind replies.
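One common pattern is to feed the dictionary's scores into a supervised model as features; a minimal sketch with NLTK's VADER lexicon and scikit-learn, where `texts` and `severity_labels` stand in for your own annotated data:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.linear_model import LogisticRegression

nltk.download("vader_lexicon")
sia = SentimentIntensityAnalyzer()

def lexicon_features(text):
    # Dictionary-derived scores become the feature vector for the classifier.
    s = sia.polarity_scores(text)
    return [s["neg"], s["neu"], s["pos"], s["compound"]]

X = [lexicon_features(t) for t in texts]
clf = LogisticRegression(max_iter=1000).fit(X, severity_labels)
```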
0.86
t3_qhhzmh
1,635,408,128
LanguageTechnology
Spanish course: Listening Comprehension + SPANISH ONLINE QUIZ
nan
0.5
t3_qhhqiu
1,635,406,976
LanguageTechnology
NER on non-sentence data
I have data being read from PDFs that is English text, more or less, such as equipment details: model numbers, manufacturer names, and a ton of technical descriptive info for electrical installations. I am trying to extract specifically model numbers, manufacturers, etc., and have attempted to do so naively with an NER model from spaCy. Prior to the NER model, we had a rule-based approach, which does not work very well due to the many formats this data can come in. Is there some better way of doing NER on non-sentence data than using a pretrained English model? Note that I need labels on a word-by-word basis, not for the whole piece of text. I have tried using 'blank' English spaCy models, which perform even worse. Are there any ideal architectures in TensorFlow or other frameworks that would work better?
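A rough sketch of training a blank spaCy (v3) pipeline on custom labels, with one hypothetical annotated example; in practice you would need at least a few hundred annotations covering the different formats your PDFs produce:

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for label in ("MANUFACTURER", "MODEL_NO"):
    ner.add_label(label)

# Hypothetical training pair: raw text plus character-offset entity spans.
TRAIN_DATA = [
    ("Siemens 3VA1110-4EF36 circuit breaker",
     {"entities": [(0, 7, "MANUFACTURER"), (8, 21, "MODEL_NO")]}),
]

optimizer = nlp.initialize()
for _ in range(20):
    losses = {}
    for text, annots in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annots)
        nlp.update([example], sgd=optimizer, losses=losses)
```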
0.86
t3_qh9z7r
1,635,378,816
LanguageTechnology
What a Cognitive Linguist means by meaning and why it could impact research in #NLProc (an unpretentious unfinished reading list)
nan
1
t3_qgazie
1,635,267,840
LanguageTechnology
Looking for partners on a project related to AI and Gender Bias (from a developing country)
Hello everyone, I'm looking for an NLP researcher for a research project related to bias and artificial intelligence by The Feminist AI Research Network. The researcher has to be from a developing country. The term in the document is "Global South", which is confusing because it does NOT mean the southern hemisphere; it basically means developing countries. My email is [hashem.elassad@hotmail.com](mailto:hashem.elassad@hotmail.com), I can send you the document from there, and my LinkedIn is [https://www.linkedin.com/in/hashemelassad/](https://www.linkedin.com/in/hashemelassad/)
0.5
t3_qg8zh9
1,635,262,336
LanguageTechnology
Custom sentence embeddings by fine-tuning transformers
Hi all, I put together some videos and articles covering the fine-tuning methods used when creating sentence transformer models, which can be used to create dense vector representations of sentences/paragraphs. It starts with [fine-tuning on NLI data with softmax loss](https://www.pinecone.io/learn/train-sentence-transformers-softmax/), then the more recent, and effective [fine-tuning with multiple negatives ranking loss](https://www.pinecone.io/learn/fine-tune-sentence-transformers-mnr/). Both articles and videos look at the PyTorch implementation, then using `sentence-transformers`. It's surprisingly easy to fine-tune, and the results (particularly with the latter approach) are really good. I hope you find it useful! Let me know if you have any questions etc :)
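For anyone who wants the shortest possible version before reading the articles, a sketch of MNR fine-tuning with `sentence-transformers`; the checkpoint and training pair here are placeholders:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilroberta-base")

# Each example is an (anchor, positive) pair; the other positives in the
# batch act as negatives, which is what makes MNR so data-efficient.
train_examples = [
    InputExample(texts=["How do I reset my password?",
                        "Steps for changing a forgotten password"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```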
0.94
t3_qg675e
1,635,254,272
LanguageTechnology
Recommendation on embedding method
Working on a text classification project; I've explored TF-IDF and word2vec before for converting text to vectors. I need recommendations on the best approach that has worked for you!
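A TF-IDF + linear model is the usual first baseline before reaching for embeddings; a minimal sketch, with `texts` and `labels` standing in for your own dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)  # then compare against word2vec/transformer features
```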
0.67
t3_qg5s32
1,635,252,864
LanguageTechnology
A Comprehensive Comparison of Word Embeddings in Event & Entity Coreference Resolution, (Accepted in Findings of EMNLP 2021)
Hello reddit, this is my first paper, which has been accepted at Findings of EMNLP 2021. Words are made of letters that cannot be understood by AI as is. Thus, word embeddings are tools used to encode a vocabulary of words into a mathematical space, which allows deep learning models to ingest textual data. To date, many word embedding methods exist, with various characteristics. Hence, this paper studies how various kinds and combinations of these embeddings perform. Additionally, I found that while there exist various kinds of embeddings trained in different ways, combining them does not greatly improve performance. One consequence is that word embeddings are better compared when used alone rather than alongside others, as otherwise their difference in performance is overshadowed by the performance already provided by the other embeddings in the system. [https://arxiv.org/abs/2110.05115](https://arxiv.org/abs/2110.05115)
0.9
t3_qg5gul
1,635,251,840
LanguageTechnology
How to rate quotes and sentences
Hi, I am building a project where we want to return quotes for a user input like "today was a funny day". How would you approach rating the quotes and sentences so as to propose a quote which actually fits? Right now we only use standard sentiment, which is not accurate at all.
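One hedged suggestion: embed the quotes and the user input with a sentence-embedding model and rank by cosine similarity instead of raw sentiment; a sketch with `sentence-transformers`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
quotes = ["Laughter is the best medicine.", "Hard times build character."]
quote_emb = model.encode(quotes, convert_to_tensor=True)

query_emb = model.encode("today was a funny day", convert_to_tensor=True)
scores = util.cos_sim(query_emb, quote_emb)[0]   # one score per quote
print(quotes[int(scores.argmax())])
```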
1
t3_qfp1xd
1,635,192,192
LanguageTechnology
Issues encoding label column for deep learning
Hi, I was wondering if anyone could provide any help? I am carrying out a comparison of binary Twitter sentiment classification models: some scikit-learn ones and a few deep learning / transformer models. My transformer and scikit-learn models run fine; however, my LSTM was producing terrible results. When using get_dummies() to encode my label column, it was producing a single-dimension array of shape (5825,). When I changed get_dummies to produce a two-dimensional (5825, 2) output more like [0, 1], my model began to run well (with a two-neuron output instead of one). Ideally, I'd like to have a single neuron. I've looked online for solutions but can't see anyone having a similar issue. Could anyone advise at all?
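One way to keep a single output neuron is to encode the labels as a flat 0/1 vector and pair it with a sigmoid output and binary cross-entropy; a sketch (the column name is hypothetical, and the rest of the LSTM model is assumed defined):

```python
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.layers import Dense

# (5825,) vector of 0/1 instead of get_dummies' one column per class.
y = LabelEncoder().fit_transform(df["label"])

# Final layer: one sigmoid unit, matched with binary cross-entropy.
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```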
1
t3_qfmb9t
1,635,184,256
LanguageTechnology
How to get a sentiment analysis 'overall score'
Hi! I'm currently working on an application that essentially runs sentiment analysis on tweets by users, using Microsoft Azure text analytics. Whenever I send a tweet to the API, the following is returned: a sentiment (positive, negative or neutral) and the confidence scores, e.g. negative: 0.03, neutral: 0.01, positive: 0.96. I'm looking to calculate an overall sentiment score, which is essentially an average over all messages by that user, from 0-100%, with 100% being very positive and 0% being very negative. What I was thinking of is potentially just having a ranking, where each message is scored positive = 1, neutral = 0, negative = -1, and then just calculating the average, multiplying by 100 and getting a percentage? Appreciate any advice, thanks!
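A small variation worth considering: instead of collapsing each tweet to -1/0/+1, use the confidence scores directly so the strength of sentiment is preserved; a sketch:

```python
def overall_score(results):
    """results: list of Azure confidence-score dicts for one user.
    Each tweet maps to [0, 1]: positive pulls up, negative pulls down."""
    per_tweet = [0.5 + 0.5 * (r["positive"] - r["negative"]) for r in results]
    return 100 * sum(per_tweet) / len(per_tweet)

print(overall_score([
    {"positive": 0.96, "neutral": 0.01, "negative": 0.03},
    {"positive": 0.10, "neutral": 0.20, "negative": 0.70},
]))  # ~58.2
```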
1
t3_qffszl
1,635,165,568
LanguageTechnology
Linguistics for the Age of AI (open access)
nan
0.97
t3_qfc89u
1,635,150,848
LanguageTechnology
Fully funded PhD position in speech tech in the Netherlands
nan
0.8
t3_qfbwz8
1,635,149,312
LanguageTechnology
Using Huggingface Transformers with ML.NET
nan
1
t3_qfbqpt
1,635,148,544
LanguageTechnology
Why is there not much research into flow models for text?
Hello. We've seen a lot of work on text VAEs and text GANs. I've yet to see a comprehensive exploration into the only remaining one of "big-three" generative models: flow-based models. Could you provide some insight into why flow-based text models are not explored much?
0.5
t3_qfaec1
1,635,142,272
LanguageTechnology
NLP for Semantic Similarities
Need some guidance and directions. I'm very new to NLP - I have used spaCy previously to perform sentiment analysis, but nothing more. My work recently requires me to build a proof-of-concept model to extract the 10 most frequently occurring concepts in a written essay of an academic nature, and the 10 most related concepts for each of the initial 10. To update my knowledge, I've familiarised myself further with spaCy. In doing so, I also came across Hugging Face and transformers. I realised that using contextual word embeddings might be more worthwhile since I am interested in meanings; I would like to be able to differentiate between "river bank" and "investment bank". 1) I would like to ask if Hugging Face will allow me to analyse a document and extract the most frequently occurring concepts in it, as well as the concepts most related to a specified concept. I would prefer to use an appropriate pre-trained model if possible, as I don't have sufficient data currently. 2) My approach would be to get the most frequently occurring noun phrases in a document, and then get the noun phrases most similar to each. Is this approach correct, or is there something more appropriate? 3) spaCy does not seem to let you get the words most similar to a specified word, unlike Gensim's `word2vec.wv.most_similar`. Is there an equivalent in Hugging Face I can use? Would really appreciate some guidance and directions here for someone new to NLP. Thank you.
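For the counting half of step 2 as described, a sketch with spaCy's noun chunks (using a model that ships with word vectors); `essay_text` is a placeholder for your document:

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_md")   # md/lg models include word vectors
doc = nlp(essay_text)

# Ten most frequent noun phrases as a rough proxy for "concepts".
chunks = [c.text.lower() for c in doc.noun_chunks]
print(Counter(chunks).most_common(10))
```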
1
t3_qf8paf
1,635,134,976
LanguageTechnology
NLP + documenting endangered and/or extinct languages?
I'm really sorry if this is vague, but I wanted to write about NLP used for documenting endangered and/or extinct languages... for anyone experienced in NLP, what would that look like?
1
t3_qf3uyp
1,635,117,952
LanguageTechnology
Multitask Prompted Training Enables Zero-shot Task Generalization (Explained)
nan
0.67
t3_qf3j4d
1,635,116,800
LanguageTechnology
How does a Chatbot use NLP?
Hey friends, I need some help. For my final year project in college, I have been tasked with creating a chatbot that is able to answer a series of questions with reasonable accuracy. While I do have some average knowledge of Python, I wasn't too sure where to begin. I looked it up on the internet, and most sources tell me I need to implement 'Natural Language Processing'. I was hoping to get answers regarding what purposes NLP would serve in a chatbot, and how exactly I should go about the implementation.
1
t3_qeomnf
1,635,066,880
LanguageTechnology
How to extract information from documents with structures
So let's say you have 500 different companies that are your suppliers, and each one of those companies sends you 200 invoices. Company A always uses the same invoice structure, company B uses its own structure, and so on: each company designs its invoices differently. But all of them share common fields: list of products, total price, total VAT, etc. My objective is to develop in Python (I'm sort of a beginner with NLP!) a model that standardises all the information into structured XML. Any guidance would really be appreciated :)
1
t3_qehfvm
1,635,036,416
LanguageTechnology
Leveraging Out-of-domain Data to Improve Punctuation Restoration via Text Similarity
nan
0.83
t3_qe8ori
1,635,008,256
LanguageTechnology
I increased the iterations in gensim LDA and the topics came out worse
I ran an LDA with 100 passes and 1 iteration and the results were pretty much OK. I increased the iterations to 100 thinking it would improve, but the coherence decreased and the topics were more similar to each other (also, there was very little difference in computing time between the two runs, 1h44 vs 1h53). What could be the reason behind this?
1
t3_qe00hr
1,634,973,312
LanguageTechnology
Tool for simple sentence rewriting
Hello! I am looking to either create or find a tool that can do some simple sentence rewriting. In particular, I'd like to take a handful of 1-sentence descriptions of services (e.g. "Understanding existing layouts and diagnosing layout issues.") and make them more consistently phrased / follow a consistent tone(?) - namely I want them all to be action-phrases instead of descriptions. I'm a Python dev and have done a little bit of NLP a long time ago - I feel like there have to be a handful of either simple NLP libraries that can identify parts of speech which are being used (to help humans do the rewriting) or even better, some ML model like GPT-3 which can just rewrite the sentences following a consistent style. Any recommendations for libraries, services, or apps which could help would be appreciated! I have a Grammarly Pro subscription - not sure if they have an API or interface which could help with this?
1
t3_qds2tv
1,634,942,208
LanguageTechnology
[Python] Best Python NLP library to segment run-on and list-like sentences
Hi everyone! I am completely new to NLP and new to Python, so I'm feeling a bit overwhelmed by the number of choices at the moment. I need a library that will allow me to take product titles such as these: 1. 50/100pcs Kraft Paper Bag Gift Bags Packaging Biscuit Candy Food Cookie Bread Seen Snacks Baking Takeaway Bags 2. Wholesale 2019 New Fashion 3D Mitsubishi Hat Cap Car logo MOTO GP Racing F1 Baseball Cap Hat Adjustable Casual Trucket Hat and run them through some function that will spit out something like this, with added commas: 1. 50/100pcs Kraft Paper Bag, Gift Bags, Packaging, Biscuit, Candy, Food, Cookie, Bread, Seen Snacks, Baking, Takeaway Bags 2. Wholesale 2019, New Fashion, 3D Mitsubishi Hat, Cap, Car logo, MOTO GP Racing, F1, Baseball Cap, Hat, Adjustable, Casual, Trucket Hat So it's very close to segmenting a paragraph into sentences, but not quite. I need something that, ideally, already has a good dictionary and, mandatorily, provides support for both English and Portuguese. The more languages, the better. What do you recommend? What specific functions in the recommended libraries should I look into? I have already checked out spaCy and its dictionary was pretty good. Is it the best option? What specific functions would I use for this? Would I need to create one of my own based on grammar? Thanks a lot! **EDIT:** Another thing I'd like is a way to detect sections in product titles containing brand and model names. For example: **Vgate Icar2 Obd2 Scanner ELM327 BT ELM 327 V2.1 Obd 2 Wifi Icar 2** Auto Diagnostic Tool For Android/Pc/Ios Code Reader. The first part of this title, which I bolded for emphasis, is basically just the brand and model numbers. Is there a ready-made solution I could use to automatically detect and segment these, perhaps based on the presence of numbers, abbreviations and unknown words?
1
t3_qdxcac
1,634,961,536
LanguageTechnology
Google Pixel 6 Tensor SoC for developing NLP applications?
The new Google Pixel 6 and Pixel 6 Pro use a new "Tensor" SoC that has support for ML. I'm getting one, and since I'm interested in NLP, I was wondering: would the hardware configuration make a significant difference for using this with NLP applications? Or possibly even for developing NLP applications on the phone? The Pixel 6 has 8GB RAM and 128 or 256GB storage. The Pixel 6 Pro has 12GB RAM and 128/256/512GB storage. Just wondering if spending the extra $$ for better specs might pay off with ML/NLP, or if it wouldn't make much of a difference.
1
t3_qdwefu
1,634,957,824
LanguageTechnology
more questions on text preprocessing for seq2seq models
I am doing a chatbot and I have a few more questions on text preprocessing for seq2seq models; hoping some of y'all know the answers. First, I need a large dataset, but I'm doing a closed-domain bot and my current dataset is too small (about 300 questions). How many more questions should I add to make my dataset big enough? Secondly, what should the threshold value (word occurrences) be? If I set it to 5, does that mean I have to add more questions covering words that did not appear more than 5 times, since those words will be removed and a wrong response may be given? Lastly, questions and answers that are too long or too short are supposed to be removed during preprocessing, but most of the questions and answers in my dataset are very long. Should I shorten them, or set a bigger max length value in the code?
0.76
t3_qdtp75
1,634,947,712
LanguageTechnology
How do I fine-tune zero shot models?
I want to fine-tune a DeBERTa model from Hugging Face. My objective is zero-shot text classification, as I do not know the classes in advance. How do I go about doing this? I would really appreciate some sample code as well.
0.75
t3_qdp67x
1,634,933,376
LanguageTechnology
NLP Unsupervised
Hi, I am working on an unsupervised NLP problem. The problem statement is to identify the emotion behind each review. But the data is not labelled; I have tried to label it using TextBlob, but I am not sure what the threshold should be to bucket the data into worry, sadness, frustration, anger, etc. Can you suggest any different ways to label it?
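For the thresholding part, a minimal sketch with TextBlob; the cut-offs are arbitrary starting points to tune against a small hand-labelled sample. Note that polarity alone cannot separate worry from anger, so fine-grained emotions usually need an emotion lexicon or a pretrained emotion classifier instead:

```python
from textblob import TextBlob

def coarse_label(text, pos_cut=0.2, neg_cut=-0.2):
    # Polarity is in [-1, 1]; the cut-offs here are hypothetical.
    p = TextBlob(text).sentiment.polarity
    if p >= pos_cut:
        return "positive"
    if p <= neg_cut:
        return "negative"
    return "neutral"

print(coarse_label("This product made me so angry"))
```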
0.75
t3_qdjlls
1,634,917,248
LanguageTechnology
Learn German: AKKUSATIV Declensions Practice Test
nan
0.29
t3_qdjdlq
1,634,916,608
LanguageTechnology
text preprocessing for seq2seq
I noticed that text cleaning is needed for preprocessing in seq2seq. Since I'm doing a chatbot, is it possible to clean ONLY the questions and not the answers? Because if the answers are cleaned, it will affect the response given to the user (e.g. lost punctuation, no whitespace, etc.)
0.91
t3_qdeuqe
1,634,902,016
LanguageTechnology
How to improve LDA topics convergence through passes and iterations
Hello! I have a corpus of 33,535 documents (SAP community posts about SAP Cloud Platform) made up of 9,120 unique tokens. I am trying to extract 15 topics: after running a model with 1 iteration and 25 passes for every number of topics between 10 and 25, 15 had the highest coherence. So far I have tried various combinations of iterations and passes (up to 100 iterations and 100 passes), but I am still not happy with the quality of the topics, as they are still pretty similar and I cannot really understand how the topics differ from each other. How could I improve my results?
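For reference, a sketch of how passes and iterations are passed to gensim, along with `alpha="auto"`, which sometimes helps separate topics; `corpus` and `dictionary` stand in for your BOW corpus and `Dictionary`:

```python
from gensim.models import LdaModel

lda = LdaModel(
    corpus=corpus,
    id2word=dictionary,
    num_topics=15,
    passes=25,        # full sweeps over the whole corpus
    iterations=400,   # per-document inference loops within each pass
    chunksize=2000,
    alpha="auto",     # learn an asymmetric document-topic prior
    eval_every=None,  # skip perplexity estimation for speed
)
```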
1
t3_qcysgm
1,634,842,752
LanguageTechnology
Updating / Editing vocab.txt for BERT finetuning
I am using Hugging Face transformers for fine-tuning a simple classification task. However, I want to update the vocab.txt that comes with the standard BERT checkpoint files with some of the words that are frequent in my training corpus. When I added these words in place of the '[unusedX]' tokens in vocab.txt, it was still not tokenising the added words. Can anyone guide me through the steps to do this?
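An alternative that avoids hand-editing vocab.txt: register the words through the tokenizer and resize the model's embedding matrix to match; a sketch (the word list is hypothetical):

```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

new_words = ["gewürztraminer", "domainword"]   # hypothetical frequent words
tokenizer.add_tokens(new_words)                # stops them being split/UNKed
model.resize_token_embeddings(len(tokenizer))  # new rows start randomly initialized
```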
1
t3_qcuamx
1,634,829,952
LanguageTechnology
T5 text-classification on colab
Hi Reddit, I wrote a [blog](https://pedrormarques.wordpress.com/2021/10/21/fine-tuning-a-t5-text-classification-model-on-colab/) post and tutorial on how to fine tune a T5 model on colab using free tier resources. Hope someone finds it useful.
1
t3_qcu9l4
1,634,829,824
LanguageTechnology
The power of constrained language models.
nan
0.96
t3_qcrnq1
1,634,822,400
LanguageTechnology
I need help designing a text-to-pictogram system
Hello community. I got my first job in NLP and I need your help. My task is to design an algorithm that receives text as input, in the form of medical indications, and outputs a series of pictograms that represent the text (a text-to-pictogram system). I would appreciate any kind of pointers, like what kind of task I should frame this as, or some route to follow so as not to waste time. Thank you.
0.81
t3_qcqpd3
1,634,819,456
LanguageTechnology
Learn Spanish Online: Spanish Pronouns + SPANISH ONLINE TEST
nan
1
t3_qcfgck
1,634,777,216
LanguageTechnology
Illustrated intro to sentence transformers
Another [illustrated guide](https://www.pinecone.io/learn/sentence-embeddings/), this time introducing sentence embeddings with transformers (aka sentence transformers) - an awesome topic I'm excited to write more about, but for now introducing sentence embeddings and transformers, which we can use in cool applications like semantic search or topic modeling. Hope you enjoy, feel free to ask me any questions, give feedback etc - thanks all!
0.99
t3_qc6b0c
1,634,751,360
LanguageTechnology
[D] Need some perspective on data tagging for NER.
Hello reddit peeps. I am using the common BIO tagging method to tag words in a sentence. I have structured my data in two lists: listA contains the sentence that needs to be tagged, listA --> [text], and listB is a list of words contained within the sentence that need to be tagged, listB --> [worda, wordb, wordc, ...etc]. Now I have looked for open-source solutions but none seem to quite work, so I wrote my own, and it works fine for English but not for Spanish or other languages. (DM me and I will send the gist link.) Does anyone know how to solve this?
1
t3_qbxiqm
1,634,724,608
LanguageTechnology
New to programming - would like to make an android app that counts syllables from natural speech
Hi all, I am a speech and language pathology student and I would like to make an app that counts a client's syllables. I am new to programming and I've realized that there is a lot to know in terms of the front end, back end, and natural language processing as well. So my simple app idea is not so simple anymore (I was naive, maybe still am). If you were in my position, how would you tackle this challenge? Which coding languages would you learn? I am not looking to become an employed programmer; this is more of a hobby. Thank you, and any words of advice would be highly appreciated.
0.78
t3_qbxcn2
1,634,723,968
LanguageTechnology
Help webscraping ACM Library (pull information that's not initially on the site)
nan
0.8
t3_qbjukd
1,634,674,304
LanguageTechnology
Looking for the right framework to implement an enterprise document management and analysis system
I have already spent quite some time on literature reviews and Google searches, but I haven't found anything suitable yet. The task is to implement a flexible and scalable enterprise document management and analysis system. I guess that represents a prototypical use case for many businesses. The perfect framework would allow on-premises operation (only Azure would be an option) and provide a low-code platform that allows receiving, tagging and registering documents (PDFs, Word and Excel files, other text files), indexing and smart search within and across documents and document collections, plus an interface to implement NLP tasks with Python. Moreover, it would be beneficial if this framework also allowed modelling metadata about documents and about the business processes they are embedded in (for example, to check and verify completeness of a set of necessary documents before further processing gets triggered). I thought about a combination of Elasticsearch and a NoSQL database like Cassandra, but that would not fit the low-code requirement. You might call me naive, but I supposed that there ought to be trillions of such frameworks, as this is such a typical use case in terms of business automation. But I have not found the right framework yet. I hope someone can provide hints. Summary: a document management and analysis framework that features: * Enterprise-ready (on premises or compatible with Microsoft Azure) * Low-code framework * Large-scale document management and analysis * Modular and extensible via Python and NLP models * Connectable to business logic (i.e. checks for completeness of document collections) * Allowing for metadata and smart search within and across documents
1
t3_qbdsje
1,634,657,024
LanguageTechnology
Categorising into topics and sub-topics
Hello Reddit! I am currently starting on my way into NLP topics and am trying to create the following application. 1) I would like to employ Python libraries to read a document and sort it into one of three PREDETERMINED groups. 2) I would then like to pull out parts of the text and check if they fall into one of 6 different PREDETERMINED sub-groups. 3) I would like to extract the section of the sub-text that falls into each group out of the original text. For example, say the uploaded document was in the food industry and was talking about vegetables: 1) Foods 2) Vegetables 3) "these potatoes are good for frying since they have...". I was looking into LDA, but that creates its own groups... All ideas and tips are highly appreciated!!!
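Since the groups are predetermined, zero-shot classification may be a better fit than LDA; a sketch with a generic NLI model, where the label names and `document_text` are placeholders for your own groups and input:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

doc_labels = ["foods", "electronics", "clothing"]            # your 3 groups
sub_labels = ["vegetables", "fruits", "meat",
              "dairy", "grains", "beverages"]                # your 6 sub-groups

print(classifier(document_text, doc_labels))    # step 1: whole document
for section in document_text.split("\n\n"):     # steps 2-3: per section
    print(classifier(section, sub_labels))
```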
0.93
t3_qbbfdt
1,634,649,728
LanguageTechnology
How to create a dataset for training NER models when you only have entity data
We have a list of entities in text files, separated by newlines. We intend to train the [flair](https://github.com/flairNLP/flair) model to detect these entities in text, but NER models require the entities to be labeled within a paragraph in BIO format. One thing that comes to mind is to create a random paragraph and inject entities at random positions, but I am not sure how it will perform. Can anyone share their thoughts on this?
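A sketch of the injection idea in BIO format; the templates are hypothetical, and the usual caveat applies that synthetic contexts may not match the distribution of your real text:

```python
import random

templates = [                     # hypothetical carrier sentences
    "The order was shipped to {} yesterday .",
    "We met with {} to discuss the contract .",
]

def make_example(entity, label="ORG"):
    prefix, suffix = random.choice(templates).split("{}")
    ent_toks = entity.split()
    tokens = prefix.split() + ent_toks + suffix.split()
    tags = (["O"] * len(prefix.split())
            + ["B-" + label] + ["I-" + label] * (len(ent_toks) - 1)
            + ["O"] * len(suffix.split()))
    return list(zip(tokens, tags))

print(make_example("Acme Corp"))
```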
0.85
t3_qb5r8u
1,634,625,536
LanguageTechnology
word2vec chatbot
Should I use a pretrained word2vec model or train a word2vec model on my own corpus? If I were to train a custom word2vec model, how do I do it?
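Training a custom model is a few lines with gensim; a sketch, where `sentences` is your corpus as a list of token lists (a real corpus should be far larger than this toy one):

```python
from gensim.models import Word2Vec

sentences = [["hi", "how", "are", "you"],
             ["i", "am", "fine", "thanks"]]   # toy stand-in for your corpus

model = Word2Vec(sentences, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("fine"))
```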
0.5
t3_qb2dqv
1,634,612,096
LanguageTechnology
A New, Cheap, and Accurate Transformer Model
Hey Reddit :) We are two young entrepreneurs developing a customizable, in-house Transformer model that could reduce standard computation costs for models like GPT-3 by up to 50% without sacrificing quality or accuracy. As this is a largely untapped market, we want the community's feedback on how they would use this service, or whether people would even care. It would be much appreciated if we could have a discussion around what you'd hypothetically use it for (either by comment or DM). Who knows, we might even throw in some free product keys in the near future. Just for idea generation purposes, some possible use cases would be: * Document Classification (e.g. sorting research papers by category) * Sequence Tagging (e.g. summarization of news articles) * Named Entity Recognition (e.g. sorting customer reviews) * Question Answering (e.g. chatbots) * Natural Language Generation (e.g. automatically creating advertising material) * Data Exploration (e.g. automatically analyzing all of a company's contracts)
0.66
t3_qapsuf
1,634,573,312
LanguageTechnology
BigScience's first paper, T0: Multitask Prompted Training Enables Zero-Shot Task Generalization
The first modeling paper out of BigScience ([https://bigscience.huggingface.co/](https://bigscience.huggingface.co/)) is here! T0 shows zero-shot task generalization on English natural-language prompts, outperforming GPT-3 on many tasks while being 16x smaller! A very big collection of prompts (~2,000 prompts for 170+ datasets) was released ([https://github.com/bigscience-workshop/promptsource](https://github.com/bigscience-workshop/promptsource)) along with the model and the paper. This was an international collaborative effort, with over 40 people across more than 25 organizations. The group included dedicated researchers and engineers from different universities, companies, and think tanks. Model: [https://huggingface.co/bigscience/T0pp](https://huggingface.co/bigscience/T0pp) Repo: [https://github.com/bigscience-workshop/promptsource](https://github.com/bigscience-workshop/promptsource) Paper: [https://arxiv.org/abs/2110.08207](https://arxiv.org/abs/2110.08207) Additionally, the T0 models were released on the Hugging Face Model Hub, and you can try them out in your browser here: [https://huggingface.co/bigscience/T0pp](https://huggingface.co/bigscience/T0pp)
0.96
t3_qanhik
1,634,566,272
LanguageTechnology
Need answer on approach for chatbot development
I am working on a side project which requires the development of a chatbot that can have general conversations with employees and ask survey questions relevant to the point of the conversation, to get key metrics on employee mental health. Any idea how I should proceed?
1
t3_qak57t
1,634,553,984
LanguageTechnology
General Questions about Higher Ed / Jobs in NLP
I have a lot of decisions to make about my career, and have been getting conflicting advice from people in my personal life. My friends and family are some very smart people, but they also do not know much about the specifics of NLP/Comp Ling industry, so I figured it would make sense to ask for some advice from people who know the industry well. I graduated this June with a Bachelors in linguistics, and a minor in computer science. I think I have a pretty strong background in linguistic theory, and a decent background in the theory of computer science, but my practical programming skills are pretty rusty (I had to take a voluntary leave from school from 2019-2021 for family illness, and haven’t programmed much in the past few years) and I haven’t had much experience with the actual cutting edge implementations of NLP or Comp Ling. Also, for NLP, I have taken one statistics course, but I think my math/stats background is not so great for getting into the ML aspect of NLP. Right now I have two main decisions I need to make: 1. Do I apply for a Masters program in NLP/Comp Ling this cycle (for starting Fall 2022)? I have seen lots of job postings in the field which want a higher degree, which is not the norm for general programming positions. It seems like having a Masters would be a significant benefit for finding a good job in the field, is this true in your experience? I think it would also have the practical benefit of helping me brush up on my programming skills, and bringing together my theoretical linguistics and CS knowledge and helping me learn how to apply that to actual practical industry problems. 2. Do I put all my focus on programming practice and interview prep now to land a job in the field as soon as possible, or do I get a part time job in the meantime doing something like being a barista? Getting a part time job doing something unrelated would obviously reduce my available free time to work towards getting a job in the industry, and there are other considerations that don’t factor into my main question here, such as paying for health insurance, etc. The real thrust of it is the question of whether I should put serious effort into getting a job in the industry during the year before I go to do a Masters, if I do in fact choose to pursue a masters. Sorry for coming in here with such an open ended set of questions, I just don’t know where else to ask people who know about the field. I really appreciate any response, or other information you think might be helpful!
0.72
t3_qa3ixg
1,634,492,288
LanguageTechnology
I have a problem in Arabic that I have no idea how to start solving.
Hello, so currently I'm looking at a problem that ~~AFAIK has never been done before.~~ (See Edit.) In Arabic we have something called "Tashkeel": when you see a word in usual text like "علم", it's actually lacking vowels. When you add the vowels (Tashkeel) it becomes "عَلَمٌ". As you might've expected, the problem I'm trying to solve is to add the Tashkeel to a text. This however isn't a simple table lookup, and a framework that learns Tashkeel will implicitly learn most of Arabic grammar and all its morphemes! The Tashkeel depends on two things: position in the sentence and context of the text. **Position in the sentence**: If I say "العَلمُ جميل" (The flag is beautiful), notice how the Tashkeel of "العلم" at the end is a small "و" (pronounced Alam**ou**); this is the default Tashkeel, i.e. the default form of the word. However, if I say "وقفت للعَلمِ" (I stood up for the flag), notice how the Tashkeel of the same word is now that slant under it (pronounced: Alam**i**). This is because "لـ" is an acting "letter" and it acts on the word by slanting it ("حرف جر"), a "slanting letter" if you will. These letters are the main reason why nouns and verbs change their form. However, "لـ" here is a slanting letter, but if I say "لالشمسُ أكبر" the same letter is now a "swearing letter" and it has no effect apart from meaning. Form also changes depending on the role in a verbal sentence: "أكل عمرُ الخبزَ" (Omar has eaten the bread). In this sentence, "عمرُ" (Omar) is the "actor" (or "doer", idk) of the eating, and "الخبزَ" is what the action was done on. "عمرُ" has that symbol on the end of it because he's the "actor", while "الخبزَ" has the slant because it's what the act was done on. And this is not just positioning: inverting Omar and the bread in the sentence without changing the form DOES NOT change the meaning, while changing the Tashkeel makes the sentence "Omar was eaten by the bread"! **Context**: If we go back to the original sentence "العَلمُ جميل", this means (The flag is beautiful), but if I change not the Tashkeel at the end but the Tashkeel of the "stem" and make it "العِلمُ جميل", it becomes (Knowledge/Science is beautiful)! The reason is that the whole stem of the word has changed; this isn't the same morpheme anymore. So how am I to undertake this? Just collect data and throw a transformer at it? How should I preprocess it? EDIT: Found an Egyptian AI company that did it: https://rdi-tashkeel.com/ ([English description](https://rdi-eg.ai/wp-content/uploads/2021/02/TASHKEEL-V4-En-final.pdf)). They say it "uses the most advanced deep-learning neural network algorithms for Tashkeel's diacritization engine". I have no idea what that means; it could be using CNNs for all I know... It has some issues when I tried it with some serious grammatical exceptions, but overall it is quite good. They claim 99% accuracy in modern Arabic and 98% in classical Arabic (text-wise).
1
t3_qa1ufi
1,634,487,296
LanguageTechnology
Allen Institute for AI (AI2) Open-Sources ‘Macaw’, A Versatile, Generative Question-Answering (QA) System
OpenAI’s GPT-3 system is the best at many tasks, including question answering (QA), but it costs money and can only be used by approved users. While there are other pretrained QA systems out on the market, none has matched its few-shot performance so far. As a possible solution to the above problem, a team of researchers from AI2 has just released [**Macaw**](https://arxiv.org/pdf/2109.02593.pdf). This versatile and generative question answering system exhibits strong zero-shot performance on a wide range of questions. The best part of Macaw is that it is publicly available for free. According to a recent study, ‘Challenge300’ (300 challenge questions), Macaw outperformed GPT-3 by over 10%. This is despite the fact that it is an order of magnitude smaller (11 billion vs. 175 billion parameters). Macaw is an impressive (T5-based) language model with not quite as wide-ranging capabilities, but it’s still better than many other systems. # [5 Min Quick Read](https://www.marktechpost.com/2021/10/16/allen-institute-for-ai-ai2-open-sources-macaw-a-versatile-generative-question-answering-qa-system/)| [Paper](https://arxiv.org/pdf/2109.02593.pdf) | [Code](https://github.com/allenai/macaw)| [AI2 Blog](https://medium.com/ai2-blog/general-purpose-question-answering-with-macaw-84cd7e3af0f7)
0.95
t3_q9qebq
1,634,439,552
LanguageTechnology
Question about the word-to-vector process
Hello, I am very new to NLP technology, so please excuse my beginner questions. In order to use various techniques like word similarity or even transformers, it looks like the first step is to convert words to vector representations. And in order to do that, one uses a huge text corpus like Wikipedia and runs CBOW, skip-gram or other methods to get the features for each word. Now say that I want to predict a word somewhere in a sentence (like the fill-mask pipeline in Hugging Face) for very domain-specific text that is not covered by Wikipedia. So my questions are: 1) Since the vectors come from Wikipedia but this domain is not really covered, will it give somewhat off/random results? 2) If word2vec was trained on a very domain-specific dataset and I am doing word similarity in a completely different domain, would it give random results as well? 3) Where I am confused is that in order to run transformer mechanisms and the like, I need feature representations, so would both need to use the same kind of domain data? I know Wikipedia generally covers a lot of domains, but at the same time I am a bit scared there is a blind spot exactly where my data mostly comes from. Thank you so much for reading and helping with this!
0.67
t3_q9e55j
1,634,397,952
LanguageTechnology
Picking the right tool
I am developing an application to analyze large PDF files that include government forms and medical records. I want to be able to identify key values in the forms, and I also want to identify key medical diagnoses, findings, symptoms, test findings, and doctor observations. I am trying to come up with a framework for determining which software tool would best suit my needs. I have looked a bit at AWS Comprehend. I just ran an MRI report through it, and none of the key findings I would want were identified ("severe stenosis", "degenerative arthritis" etc.). Of course I am not going to throw up my hands after one test, but it makes me wonder: is it the right tool? And more specifically, how should I go about determining what the best tool would be? In making this decision, I am not interested in spending buckets of money on a software product.
1
t3_q9dyqy
1,634,397,312
LanguageTechnology
Integer embeddings (LSTM vs GloVE vs BERT) [screencast tutorial]
nan
0.83
t3_q9937f
1,634,378,496
LanguageTechnology
AutoNLP - by HuggingFace was just announced
https://www.youtube.com/watch?v=y7xEDeK7KVk&ab_channel=HuggingFace Check it out! But also let me know your thoughts!
0.95
t3_q98erv
1,634,375,168
LanguageTechnology
Offline text-labeling tool
Hi! I'm working on a project involving NER and a large amount of unlabeled data. I need to label some documents to create an evaluation set. The problem is that I have to work without an internet connection. I have tried a few labeling-tools, which claimed to be available for offline use, such as doccano, but none worked. Has anyone had any experience with a labeling tool that worked offline? Thanks
1
t3_q8vxw9
1,634,327,168
LanguageTechnology
Some questions when I read the paper
Has anybody read the paper [Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization](https://aclanthology.org/2020.acl-main.556.pdf)? I am confused about the input to the decoder. What is g^0 (the paragraph just above Equation 9)? And why does the objective function use a lambda on L_ext?
1
t3_q8rx95
1,634,315,008
LanguageTechnology
Seeing Voices: 1 - Intro to Spectrograms [Video]
nan
0.92
t3_q8mb6d
1,634,296,704
LanguageTechnology
Japanese search engine
I want to build a search engine for Japanese. Japanese is difficult because there are no spaces between words, and verbs are conjugated to show tense, negation, and politeness. Japanese is also tricky because it uses multiple character systems: two phonetic systems (hiragana for native words and katakana for foreign words) and a symbolic system borrowed from Chinese (kanji). What would you need to do first, compared to English, before you could create a tf-idf index?
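The usual first step is morphological analysis to get word boundaries and lemmas (so conjugated verb forms collapse to one index term); a sketch with fugashi, a MeCab wrapper (assumes `pip install fugashi unidic-lite`):

```python
from fugashi import Tagger

tagger = Tagger()
text = "東京でラーメンを食べました"

# Surface form plus lemma: 食べました -> 食べる, ready for a tf-idf index.
for word in tagger(text):
    print(word.surface, word.feature.lemma)
```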
1
t3_q8c7b6
1,634,256,640
LanguageTechnology
Resume section segmentation
Hi, newbie to NLP here. I am trying to build a resume parser to extract structured data from resumes. Before doing extraction, I'm thinking of doing section segmentation, e.g. skills is its own section, so I can run NER on that section and label the results as skills. How do I go about this segmentation? Does it involve OCR? TIA.
0.5
t3_q8evw7
1,634,265,856
LanguageTechnology
Nywspaper: comparing news using transformers
Hello everyone, I have built [nywspaper](https://nywspaper.com), a news aggregator / reader / comparison tool for my bachelor's thesis, and I am very excited to share it with you here. The goal of this tool is to make it easier for the readers to understand media bias in the news, by allowing paragraph by paragraph comparison between news articles covering the same story. When you're reading an article on nywspaper, if you click on a paragraph, you get paragraphs similar to the one you clicked from other publishers. This way you can see how a right wing news publisher delivers the same information differently than a left wing publisher. In the main page you can see articles grouped by events, and you can just navigate to the article and begin comparing. There is also a feedback button in the similar paragraph boxes, if you particularly like or dislike a paragraph that was suggested. I am really looking forward to hearing your thoughts on this tool, and if it could be used to fight media bias. I would also hugely appreciate it if you could have a chance to fill out this [survey](https://www.questionpro.com/t/AT7qPZpHzt) after you use the tool (this would help for the thesis). Thanks!
0.95
t3_q87sxl
1,634,242,432
LanguageTechnology
We Need to Talk About Data: The Importance of Data Readiness in Natural Language Processing
Hey there, We've collected our experiences on teasing out the data readiness of organizations in relation to ML/NLP projects. We describe a method comprised of 15 questions that help stakeholders gauge their data readiness, along with a way to visualize the outcome of applying the method. arXiv: [https://arxiv.org/abs/2110.05464](https://arxiv.org/abs/2110.05464) Abstract: In this paper, we identify the state of data as being an important reason for failure in applied Natural Language Processing (NLP) projects. We argue that there is a gap between academic research in NLP and its application to problems outside academia, and that this gap is rooted in poor mutual understanding between academic researchers and their non-academic peers who seek to apply research results to their operations. To foster transfer of research results from academia to non-academic settings, and the corresponding influx of requirements back to academia, we propose a method for improving the communication between researchers and external stakeholders regarding the accessibility, validity, and utility of data based on Data Readiness Levels. While still in its infancy, the method has been iterated on and applied in multiple innovation and research projects carried out with stakeholders in both the private and public sectors. Finally, we invite researchers and practitioners to share their experiences, and thus contributing to a body of work aimed at raising awareness of the importance of data readiness for NLP. And the code for the visualizations is here: GitHub: [https://github.com/fredriko/draviz](https://github.com/fredriko/draviz) I'll be happy to hear any feedback! :)
0.9
t3_q86ouv
1,634,239,104
LanguageTechnology
BERT models: how resilient are they to typos?
Hello, let me introduce the context briefly: I'm fine tuning a generic BERT model for the context of food and beverage. The final goal is a classification task. To train this model, I'm using a corpus of text gathered from blog posts, articles, magazines etc... that cover the topic. I am however facing a predicament that I don't know how to handle: specifically, there are sometimes words that either contain a typo, or maybe different accents, but that are semantically the same. Let me give you an example to briefly illustrate what I mean: The wine `Gewürztraminer` is correctly written with the `ü`, however sometimes you also find it written with just a normal `u`, or some other times even just `Gewurtz`. There are several situations like this one. Now, a human being would obviously know that we're talking exactly about the same thing, but I have absolutely no idea about how BERT would handle these situations. Would it understand that they're the same thing? Would it consider them instead to be completely different words? I am currently in the process of cleaning my training data, fixing the typos and trying to even out all these inconsistencies, but at this point I'm not even sure if I should do that at all, considering that the text that will need to be classified can potentially contain typos and situations like the one described above. What would you guys suggest?
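One quick way to get intuition is to look at what the tokenizer actually produces for each spelling; a sketch (the checkpoint name is just one plausible choice):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
for spelling in ["Gewürztraminer", "Gewurztraminer", "Gewurtz"]:
    print(spelling, tok.tokenize(spelling))
# Different spellings usually yield different subword sequences, so BERT
# sees related but not identical inputs; including both variants in the
# training data (or normalizing at inference time) closes the gap.
```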
0.97
t3_q821td
1,634,225,408
LanguageTechnology
How is the context stored in the context vector in an encoder-decoder transformer model?
I know that a transformer, e.g. BERT, can understand the context of a paragraph, but how does the BERT model store that context? I understand that a word can be turned into a vector using one-hot encoding or another approach, but storing context in a vector I don't get at all. Please help.
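Roughly: BERT has no single stored "context vector"; every token's output vector is recomputed from its neighbours by self-attention, so the same word gets a different vector in different sentences. A sketch that makes this visible:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word="bank"):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    idx = inputs["input_ids"][0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

a = word_vector("she sat by the river bank")
b = word_vector("he works at an investment bank")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```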
1
t3_q7zvj7
1,634,218,624
LanguageTechnology
On Self-Service, Data Democratization and (Natural) Language
nan
1
t3_q7yqws
1,634,214,784
LanguageTechnology
Dense vectors for NLP (and some vision)
Hi all, I put together an [article and video](https://www.pinecone.io/learn/dense-vector-embeddings-nlp/) on a few of the coolest (and most useful, of course) embeddings for NLP, and also text-image with OpenAI's CLIP at the end. Planning on diving into each area in more depth in the future! Let me know what you think, if I'm missing anything or if you have any questions! Thanks!
1
t3_q7y7pn
1,634,212,736
LanguageTechnology
Sentiment analysis on software engineering texts
What are some possible ways to improve sentiment dictionaries for analysing SE texts? There are several SE-specific sentiment dictionaries, but you cannot expect much accuracy when analysing open-source projects. Thank you.
1
t3_q7ueud
1,634,194,816
LanguageTechnology
I have some problems with understanding how an LSTM can solve Sentiment Analysis.
I already have a grip on how to solve a sentiment analysis problem (pre-process the dataset, embed the words, feed the word vectors to an LSTM and boom, I have a model that can predict whether a sentence is positive or negative). What I still don't understand is what the LSTM layer does with the word vectors. Does it use them to understand the meaning of the sentence, and if so, how? Finally, once it has understood the meaning, how can it know whether the sentence is positive or negative?
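In short: the LSTM reads the word vectors one step at a time and keeps a running hidden state; the final state acts as a learned summary of the sentence, and a sigmoid unit maps it to P(positive), all trained end-to-end from labelled examples. A minimal Keras sketch of that shape:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM

model = Sequential([
    Embedding(input_dim=20_000, output_dim=100),  # word ids -> word vectors
    LSTM(128),                       # final hidden state = sentence summary
    Dense(1, activation="sigmoid"),  # summary -> P(positive)
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```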
0.92
t3_q7r5mj
1,634,181,120
LanguageTechnology
Ways to reduce memory consumption in Q&A tasks without damaging accuracy (or at least, not that much)?
I'm facing this problem: I'm trying to use less memory in my Q&A task using BERT. I debugged my steps and saw that computing the start and end logits, `start_logits, end_logits = model(**inputs)`, costs more than 11 GB of RAM. Is there any way to solve this? I mean, use less memory to perform this task without harming my model's accuracy? If so, can someone share some options, and some alternative approaches in case this is not possible?
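Two cheap things to try first, assuming `model` and `inputs` are your existing Hugging Face QA model and tokenized batch: run inference under `torch.no_grad()` (so no gradient buffers are kept) and, on GPU, use fp16 weights:

```python
import torch

model.eval()
model.half()            # fp16 roughly halves memory; GPU only

with torch.no_grad():   # skip autograd bookkeeping during inference
    outputs = model(**inputs)
start_logits, end_logits = outputs.start_logits, outputs.end_logits
```

If the documents are long, splitting them into overlapping windows (the tokenizer's `stride` argument) and processing smaller batches also keeps the peak usage down.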
1
t3_q7ppn0
1,634,176,000
LanguageTechnology
Cambridge Quantum (CQ) Open-Sources ‘lambeq’: A Python Library For Experimental Quantum Natural Language Processing (QNLP)
[Cambridge Quantum (“CQ”)](https://cambridgequantum.com/) announced the release of the world’s first toolkit and an [open-source library ](https://github.com/CQCL/lambeq)for Quantum Natural Language Processing (QNLP), called [‘lambeq’](https://arxiv.org/abs/2110.04236). Speaking in simple words, ‘lambeq’ is the toolkit for QNLP (Quantum Natural Language Processing) to convert sentences into a quantum circuit. It can be used to accelerate development in practical, real-world applications such as automated dialogue systems and text mining, among other things. ‘lambeq’ has been released on a fully [open-sourced basis](https://github.com/CQCL/lambeq) for the benefit of all quantum computing researchers and developers. Lambeq seamlessly integrates with CQ’s (Cambridge Quantum) TKET, the world’s leading and fastest-growing quantum software development platform that is also fully open-sourced. The open-sourcing of this technology provides QNLP developers with an even broader range for their work. # [Quick 3 Min Read](https://www.marktechpost.com/2021/10/13/cambridge-quantum-cq-open-sources-lambeq-a-python-library-for-experimental-quantum-natural-language-processing-qnlp/) | [Paper](https://arxiv.org/abs/2110.04236) | [Github](https://github.com/CQCL/lambeq) | [Documentation](https://cqcl.github.io/lambeq/) |[CQ Blog](https://medium.com/cambridge-quantum-computing/quantum-natural-language-processing-ii-6b6a44b319b2)
1
t3_q7lfap
1,634,161,536
LanguageTechnology
Fresh Machine Translation benchmark study: 29 MT engines, 13 language pairs, 7 domains (Aug 2021)
Hi folks, we've just published our new State of the Machine Translation 2021 report, prepared together with TAUS: https://hubs.la/H0ZbJhN0 Every year we release an independent multi-domain evaluation of MT engines to help you choose the best-fit providers for your language pair and industry sector. In this year's edition, we analyzed 29 commercial and open-source MT engines across 13 language pairs and 7 key domains, including Healthcare, Education, Financial, Legal, Hospitality, and General. We also explain which scores to use for MT evaluation. Happy reading, and please share your questions and ideas afterward!
1
t3_q7jr8p
1,634,156,416
LanguageTechnology
Job opportunities for a fellow linguist?
Hello folks, first time posting here; I bring a potentially different question. My girlfriend is a newly graduated linguist applying for a Master's degree in Linguistics. She doesn't have a computer science or mathematics background, but looking online, some NLP job openings do seem to exist for linguists and natural language teachers/researchers. I am a computer scientist with a background in programming languages, and I have some (albeit not deep) knowledge of machine learning. We are looking for a way to get her into a more company-oriented career, rather than an academic one as a lecturer. Always good to have options. What are your thoughts on this? Could she potentially land an NLP-related job? How much of a statistics/machine learning/computer science background would she have to develop?
1
t3_q7hq6t
1,634,150,656
LanguageTechnology
Microsoft and NVIDIA AI Introduces MT-NLG: The Largest and Most Powerful Monolithic Transformer Language NLP Model
Transformer-based language models have made rapid progress in many natural language processing (NLP) applications, thanks to the availability of large datasets, large computation at scale, and advanced algorithms and software to train these models. High-performing language models need many parameters, a lot of data, and a lot of training time to develop a richer, more sophisticated understanding of language. As a result, they generalize well as effective zero- or few-shot learners on various NLP tasks and datasets with high accuracy. However, training such models is problematic for two reasons: * The parameters of these models can no longer fit into the memory of even the most powerful GPU. * Special attention is required for optimizing the algorithms, software, and hardware stack as a whole. If proper attention is not provided, the large number of computing operations required can result in unrealistically long training times. Microsoft and NVIDIA present the Megatron-Turing Natural Language Generation model (MT-NLG), powered by [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM), the largest and most robust monolithic transformer language model, trained with 530 billion parameters. # [5 Min-Quick Read](https://www.marktechpost.com/2021/10/13/microsoft-and-nvidia-ai-introduces-mt-nlg-the-largest-and-most-powerful-monolithic-transformer-language-nlp-model/) | [Microsoft Blog](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/)
0.94
t3_q7fx3x
1,634,145,408
LanguageTechnology
Label unstructured data using Enterprise Knowledge Graphs
Hi, I have published a new blog post about entity linking with domain-specific enterprise KGs: [https://revenkoartem.medium.com/label-unstructured-data-using-enterprise-knowledge-graphs-3-ca3cd1b14a36](https://revenkoartem.medium.com/label-unstructured-data-using-enterprise-knowledge-graphs-3-ca3cd1b14a36). There is also a piece of code that allows you to train the model and try it out.
1
t3_q7amqo
1,634,130,048
LanguageTechnology
Hello, I'm getting into NLP and wondering if I should start with a project or regular courses.
I need to learn NLP for a position and need help deciding whether to learn through a project or to start with a book/course (one that seems interesting is [this](https://web.stanford.edu/%7Ejurafsky/slp3/ed3book_sep212021.pdf)). Background: I'm already familiar with DNNs, somewhat familiar with CNNs and their architectures, and I already know what an LSTM is. A project that I want to do is an Arabic document (mostly books) summarizer. Which should I do?
0.81
t3_q78mna
1,634,122,624
LanguageTechnology
An illustrated tour of wav2vec 2.0
When Transformers started getting popular for NLP, we saw great visualizations to understand better the internals of these models like The Illustrated BERT, GPT... I haven't seen much like that for speech processing, so I wrote this quick post to illustrate the architecture and pre-training process of wav2vec 2.0 (now part of the HuggingFace library). [https://jonathanbgn.com/2021/09/30/illustrated-wav2vec-2.html](https://jonathanbgn.com/2021/09/30/illustrated-wav2vec-2.html) Hope this is useful : )
0.9
t3_q76fa4
1,634,112,128
LanguageTechnology
Machine Translation With Sequence To Sequence Models And Dot Attention Mechanism
nan
0.8
t3_q6xxox
1,634,079,744
LanguageTechnology
A tutorial on how to create quick NLP Text Generation Using Gradient Workflows and GitHub
nan
0.75
t3_q6vlv4
1,634,072,448
LanguageTechnology
Sentiment Analysis on Bug reports' description
Is sentiment analysis on the description field of bug reports useful for severity prediction? If it is, what can be done to improve the process? Thank you for your kind replies.
1
t3_q6ta1d
1,634,065,664
LanguageTechnology
JAX/Flax speedup on HuggingFace
A friend of mine pointed out the faster compute times of JAX/Flax vs PyTorch in testing by HuggingFace [over here](https://github.com/huggingface/transformers/tree/master/examples/flax). Maybe I'm just late to the party, but they're pretty significant: MLM training, for example, is 15h32m with JAX/Flax vs 23h46m with PyTorch/XLA. Thought it was cool; maybe a good idea to put some time into JAX.
1
t3_q6s7xh
1,634,062,592
LanguageTechnology
[Spacy and Yake] 107+ million journal articles, mined: the General Index (4.7 TiB)
nan
1
t3_q6rpa3
1,634,061,056
LanguageTechnology
How do I specify a max character length per sentence for summarization using transformers (or something else!)?
Hi there, I am exploring different summarization models for news articles and am struggling to work out how to limit the number of characters per sentence using Hugging Face pipelines, or whether this is even possible (or a silly question to begin with!). I have the following setup, passing in the article text and a model name of 'facebook/bart-large-cnn', 'google/pegasus-cnn_dailymail' or 'sshleifer/distilbart-cnn-6-6': summarizer = pipeline("summarization", model=model_name); summarized = summarizer(article_text, max_length=118, clean_up_tokenization_spaces=True, truncation=True). The articles range in length from 100 words to 1000 words. I am hoping to cap the number of characters per sentence at 118, a hard cap for my application. When I set max_length to 118 the sentences are usually below this limit, but they can be, say, 220 characters, or sometimes just truncate at the end. Alternatively, if there are different summarization methods which allow such a limit, in case it is not possible with transformers, I would love to hear about them. Would be wonderful if someone could let me know what I'm doing wrong! Thanks a lot.
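One thing worth noting: `max_length` in the pipeline counts tokens for the whole summary, not characters per sentence, so a character cap has to be enforced as a post-processing step; a sketch:

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")

def cap_sentences(summary, limit=118):
    # Keep only generated sentences that respect the hard character cap;
    # alternatively, re-generate or truncate the offenders.
    return " ".join(s for s in sent_tokenize(summary) if len(s) <= limit)
```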
0.88
t3_q6jai3
1,634,035,072
LanguageTechnology
GEC Master's Research Proposal: English or Japanese?
I am applying for a Japanese NLP master's program, and I have decided that I am interested in Grammatical Error Correction. My issue is choosing a research topic to list on the application, and in particular which language to work with. Let us assume that the jobs I would apply to in the future will involve English.

If I choose to do something in English, it is clearly the largest market and has the most research activity. I could use the latest public resources immediately, and there are huge, detailed corpora. However, coming up with research ideas is proving hard for me. Every time I have an idea, I search and find that it has already been done by people far beyond my level, and if anything the pace is accelerating. I also feel like anything I could do would be a drop in the bucket.

On the other hand, I could do something with Japanese. However, it has a learner population of just a few million, and the native-speaker population is actually shrinking. In terms of research there are many more gaps to fill and unexplored paths, but there are fewer tools and corpora available. My Japanese level is N2/B2, so I can get by at the university and work with text, but I probably won't be the best choice to write authoritative grammatical rules or annotations.

I'm really wavering trying to figure this out. To employers, does it look better to work on the language with fewer resources, since it implies that I could do at least as well in the richer environment of English? I hear that the majority of Japanese NLP researchers choose English, and they could surely do better Japanese work than I can, which has been worrying me as well. My core question is whether Japanese GEC is a reasonable choice for a native English speaker, but I am also open to any GEC research suggestions at all, since I am still just starting on the proposal.
0.67
t3_q6fzfj
1,634,020,480
LanguageTechnology
Do any of you know of an app that, based on your typing, can create a list of the words in your vocabulary (and any other useful stats, like spelling mistakes)?
nan
0.8
t3_q68xob
1,633,995,648
LanguageTechnology
How should I engineer features for a Named Entity Identification task?
I was working on a Named Entity Identification (not Recognition) task. In this NLP task, given a sentence, the model has to predict whether each word (aka token) is a named entity or not. The dataset used was the CoNLL-2003 dataset.

Initially, I included a feature `first-letter-capital`, which was `1` if a token had its first letter capitalized. The model learned to predict the first word of each sentence as a named entity. So I removed this feature and added a feature `first-letter-capital-for-non-sentence-start-word`, which was `1` if a word was not the first word of the sentence and had a capitalized first letter. This made the model classify the first word of each sentence as a non-named entity. When I kept neither feature, the model predicted no word as a named entity. Why might this have happened? Can someone share their insight?

**PS:**

* I am using an SVM (and I have to solve this problem with an SVM only, as that's the task I was given).
* I am not using any word embeddings; somehow they made execution very slow with the SVM (maybe due to the 300 dimensions of the embeddings). I simply formed features by parsing sentences / surrounding tokens of the target token (I know this reduces the task to a possibly non-NLP, plain classification task).
* `first-letter-capital-for-non-sentence-start-word` required checking whether the target token was the first one in the sentence.
* The feature `first-letter-capital` does not need to consider surrounding tokens.
* There are other features too (like POS tags), but they are less relevant here as they don't relate to capitalization of the tokens.
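For concreteness, a minimal sketch of the kind of token-level features I mean (the two capitalization features mirror the description above; the extra ones are purely illustrative):

    def token_features(tokens, i):
        # Binary features for the i-th token of a tokenized sentence.
        tok = tokens[i]
        return {
            "first-letter-capital": int(tok[:1].isupper()),
            "first-letter-capital-for-non-sentence-start-word":
                int(i > 0 and tok[:1].isupper()),
            "all-caps": int(tok.isupper()),                        # illustrative
            "contains-digit": int(any(c.isdigit() for c in tok)),  # illustrative
        }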
0.76
t3_q68ilm
1,633,994,368
LanguageTechnology
Available Filipino / Tagalog Dictionary for LIWC
Hello! I am trying to extract features from texts using the Linguistic Inquiry and Word Count (LIWC) tool. The texts contain both English and Filipino/Tagalog. After checking the documentation and asking the developers, they mentioned that they only support English and certain other languages, not Filipino/Tagalog. They did mention, however, that custom-made dictionaries can be used to handle the Filipino/Tagalog portions of the text. So I would just like to ask: are there any available Filipino/Tagalog dictionary files that we can use with LIWC? Thanks!
0.91
t3_q5ymki
1,633,966,848
LanguageTechnology
How to compare speed between NLP models
Hey everyone, How do you compare the speed of, say, two NLP models? For example, comparing one that uses GloVe and one that uses Word2Vec?
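To make the question concrete, the naive timing harness I'd start from looks like this (a sketch, not a rigorous benchmark; it assumes each model exposes a predict callable and both run over the same texts):

    import time

    def time_model(predict, texts, warmup=5):
        # Warm-up calls avoid counting one-off costs (lazy loading, caches).
        for t in texts[:warmup]:
            predict(t)
        start = time.perf_counter()
        for t in texts:
            predict(t)
        return (time.perf_counter() - start) / len(texts)  # mean sec/doc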
1
t3_q5xp76
1,633,964,288
LanguageTechnology
Need Help With LDA Topic Modelling
Hey there! I've been playing with LDA for topic modelling recently and have been wondering: how do you assess the results of this model non-manually? I looked for ways to do it but didn't find many interesting leads. Also, is there any rule of thumb for setting the number of topics? And any other useful tips you would give to a newbie in this area? TIA
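The one lead I did find is topic coherence; a minimal sketch with gensim (assuming an already-trained LdaModel, its Dictionary, and the tokenized documents are in scope):

    from gensim.models import CoherenceModel

    # Higher c_v coherence tends to track human judgments of topic quality;
    # sweeping num_topics and picking the peak is a common rule of thumb.
    cm = CoherenceModel(model=lda, texts=tokenized_docs,
                        dictionary=dictionary, coherence="c_v")
    print(cm.get_coherence())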
1
t3_q5yfhw
1,633,966,336
LanguageTechnology
Video Series on How to Create a Virtual Assistant using Python
nan
0.67
t3_q5w7wt
1,633,960,064
LanguageTechnology
Preparing data for training NER models
Training most Named Entity Recognition (NER) models, for example [Flair](https://github.com/flairNLP/flair), usually requires data formatted in the [BIO tagging](https://en.wikipedia.org/wiki/Inside-outside-beginning_(tagging)) scheme shown below, where sentences are separated by a blank line:

    George N B-PER
    Washington N I-PER
    went V O
    to P O
    Washington N B-LOC

    Sam N B-PER
    Houston N I-PER
    stayed V O
    home N O

But instead of labeled text, we have entity data separated by newlines in text files, so if we process the data in the above format it will look something like the following, which contains only the entity tokens:

    George N B-PER
    Washington N I-PER

    Washington N B-LOC

    Sam N B-PER
    Houston N I-PER

Is it OK if the processed data looks like this?
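If the original raw sentences are still available, one option is to reconstruct full BIO tags by matching the known entities against each tokenized sentence. A rough sketch of what I mean (the entity lookup and matching scheme are simplified assumptions, not Flair's API):

    def bio_tag(tokens, entities):
        # entities maps entity token tuples to a type, e.g.
        # {("George", "Washington"): "PER", ("Washington",): "LOC"}
        tags = ["O"] * len(tokens)
        # Match longer entities first so "George Washington" (PER) is not
        # partially overwritten by the shorter "Washington" (LOC).
        for span, etype in sorted(entities.items(), key=lambda e: -len(e[0])):
            n = len(span)
            for i in range(len(tokens) - n + 1):
                if tuple(tokens[i:i + n]) == span and all(t == "O" for t in tags[i:i + n]):
                    tags[i] = "B-" + etype
                    for j in range(i + 1, i + n):
                        tags[j] = "I-" + etype
        return list(zip(tokens, tags))

My worry with keeping only the entity lines is that the model never sees the O-tagged context between entities, which is exactly what sequence taggers learn from.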
0.67
t3_q5ryjv
1,633,943,936
LanguageTechnology
Keyphrase extraction tools for non-English languages
Hey, people! Hope y'all are doing fine! **TLDR: Please share good keyphrase extraction tools for Portuguese, Spanish and English.** I've been trying to find a good keyphrase extraction tool for Portuguese, Spanish and English. However, it hasn't been a trivial task, since most tools I've found require some effort to handle non-English languages, and comparing these tools isn't that feasible either. Some papers I've found also provide no code, or ill-maintained code, making them difficult to use. So it isn't simple at all to find good tools for this task.
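For what it's worth, the closest thing to plug-and-play multilingual support I've found so far is YAKE, which takes a language code directly. A minimal sketch (the parameter values are just what I've been experimenting with):

    import yake

    # lan="pt" for Portuguese; "es" and "en" for Spanish and English.
    extractor = yake.KeywordExtractor(lan="pt", n=3, top=10)
    for phrase, score in extractor.extract_keywords(text):
        print(score, phrase)  # in YAKE, lower scores mean more relevant phrases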
1
t3_q5ibxw
1,633,905,024
LanguageTechnology
Findings of EMNLP 2021 Poster Presentation?
Did anyone else who was accepted to Findings of EMNLP 2021 receive an email from PC EMNLP-2021 asking if you wanted to present a poster at EMNLP 2021? If you filled out the attached form, have you heard any details back? I'm hoping to hear from them soon so I can plan travel.
1
t3_q5epba
1,633,893,504
LanguageTechnology
using tf-idf vectorizer with JSON file
I was initially using a bag-of-words model instead of tf-idf, and the input was a JSON file (a dictionary). I've found many examples online that run tf-idf on a plain corpus, but none on a JSON file. Does anyone know how to do it with a JSON file? My JSON dataset is quite large.
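The rough shape I'm imagining is just to pull the document strings out of the JSON and hand them to the vectorizer. A minimal sketch (assuming the JSON maps keys to document strings; my actual field layout may differ):

    import json
    from sklearn.feature_extraction.text import TfidfVectorizer

    with open("dataset.json") as f:
        data = json.load(f)        # e.g. {"doc1": "some text", ...}

    docs = list(data.values())     # the vectorizer just needs an iterable of strings
    X = TfidfVectorizer().fit_transform(docs)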
0.5
t3_q4xdvc
1,633,825,152
LanguageTechnology
When should you train a custom tokenizer/language model?
I am trying to better understand when you should train a custom tokenizer and language model for your dataset. My go-to is spaCy and Prodigy, but I realize there are limitations. Training a RoBERTa model or something similar with HuggingFace seems appealing, since the MLM pre-training could give you some advantages over what I would get with spaCy models plus Prodigy active learning, given how robustly such a model can learn the domain context. My primary use cases are NER & text classification. Any suggestions or tips would be greatly appreciated.
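For anyone weighing the same trade-off, the tokenizer half at least is cheap to prototype. A minimal sketch with the HuggingFace tokenizers library (the file name and hyperparameters are placeholders, and API details can differ across versions):

    from tokenizers import ByteLevelBPETokenizer

    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train(files=["domain_corpus.txt"], vocab_size=30_000,
                    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
    tokenizer.save_model("my_tokenizer")  # writes vocab.json and merges.txt

One quick signal for whether this is worth it: tokenize a sample of domain text with the stock tokenizer and check how badly domain terms fragment into subwords.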
1
t3_q4fau3
1,633,759,232
LanguageTechnology
Training NER models for detecting custom entities
Hello everyone, we are working on a task to detect certain `custom entities` in text. We tried training [spaCy](https://spacy.io/), but it's not converging. Can anyone suggest other `Named Entity Recognition (NER)` models that can be trained to detect custom entities?
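One alternative we've started sketching is Flair, which trains a sequence tagger from BIO-formatted column data. A rough outline (file paths, column layout, and hyperparameters are placeholders, and the API varies a bit between Flair versions):

    from flair.datasets import ColumnCorpus
    from flair.embeddings import WordEmbeddings
    from flair.models import SequenceTagger
    from flair.trainers import ModelTrainer

    # Columns: token in column 0, BIO tag in column 1.
    corpus = ColumnCorpus("data/", {0: "text", 1: "ner"}, train_file="train.txt")
    tag_dict = corpus.make_tag_dictionary(tag_type="ner")
    tagger = SequenceTagger(hidden_size=256, embeddings=WordEmbeddings("glove"),
                            tag_dictionary=tag_dict, tag_type="ner")
    ModelTrainer(tagger, corpus).train("models/custom-ner", max_epochs=20)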
0.79
t3_q4f6qw
1,633,758,592
LanguageTechnology
Google AI Introduces ‘FLAN’: An Instruction-Tuned Generalizable Language (NLP) Model To Perform Zero-Shot Tasks
To generate meaningful text, a machine learning model needs a lot of knowledge about the world and the ability to abstract over it. While language models trained for this are becoming increasingly capable of acquiring such knowledge automatically as they grow, it is unclear how to unlock that knowledge and apply it to specific real-world tasks.

Fine-tuning is one well-established method for doing so. It involves training a pretrained model like BERT or T5 on a labeled dataset to adapt it to a downstream task. However, it requires a large number of training examples, plus a stored set of model weights for each downstream task, which is not always feasible, especially for large models.

A recent Google study looks into a simple technique known as instruction fine-tuning, or instruction tuning. This entails fine-tuning a model to make it more receptive to performing NLP (natural language processing) tasks in general, rather than one specific task.

# [Google AI Blog](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html) | [5 Min Read](https://www.marktechpost.com/2021/10/08/google-ai-introduces-flan-an-instruction-tuned-generalizable-language-nlp-model-to-perform-zero-shot-tasks/) | [Paper](https://arxiv.org/pdf/2109.01652.pdf) | [Github](https://github.com/google-research/flan)
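To make "instruction tuning" concrete: every training task is phrased as a natural-language instruction, so a single model learns to follow instructions rather than one task format. A hypothetical illustration of the data shape (not drawn from the FLAN paper itself):

    examples = [
        {"input": "Translate to French: The weather is nice today.",
         "target": "Il fait beau aujourd'hui."},
        {"input": "Is this review positive or negative? 'The plot dragged badly.'",
         "target": "negative"},
    ]
    # At inference time an unseen task is posed the same way ("zero-shot"),
    # e.g. "Summarize: <article text>", with no task-specific weights.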
0.94
t3_q4btiq
1,633,744,128
LanguageTechnology
Comparative study of extractive summarization
Hello, I've been looking for a while for a comparative study that lays out the characteristics of each state-of-the-art model in a table, and I haven't found one. Can anyone help? Example: BERT has a bidirectional encoder ✔️, is multilingual ✔️, is used for summarization ✔️, and so on for the features that distinguish it from other models. Thank you
0.57
t3_q4a9ak
1,633,738,368
LanguageTechnology
Any allennlp users in this sub?
I have a whole host of questions that the official allennlp docs are unclear on - too many to post individually here - but no one to answer them. If there are any allennlp users in this sub who wouldn't mind discussing them with me one-on-one, I would appreciate it tremendously. Apologies for the nebulous post, but thank you in advance!
1
t3_q43qjg
1,633,717,760