sub | title | selftext | upvote_ratio | id | created_utc |
---|---|---|---|---|---|
LanguageTechnology | Job offer advice for new grad interested in NLP | Hi everyone.
I am a new grad trying to pick my first software engineering job. As an undergrad, I had a strong interest in NLP and did research in that area, and I would like to continue working in NLP. I am deciding between two job offers.
At the first company, I would be working in NLU/NLG teams for the company's voice assistant technologies. They also publish often, which sounds nice to me since I might consider grad school in the long term. However, they are offering me a systems role, so I will mostly be working on ML infrastructure (C++) without manipulating their models or doing core ML engineering.
At the second company, I am hired as an ML engineer, but I will be working on ranking. The tech stack is mostly Python. The downside is that I won’t be doing any NLP work.
If I want to have a career in NLP, would accepting the first company's offer be better, even if I am not working directly on the models as an ML engineer? Or would it be more important to have the title of "ML engineer," even if I am working in a different problem area? | 1 | t3_qxwv30 | 1,637,379,328 |
LanguageTechnology | Make a bot based on social media chat data | I have very long chats with my friends, including one with 4 years of constant messaging followed by two years of medium-level messaging. I also have some groups in which I have participated heavily for the last 6 years. Can I make a chatbot-type thing which chats like me? Has anyone already worked on this? | 0.72 | t3_qxu7y2 | 1,637,370,496 |
LanguageTechnology | I want to spy on myself. My digital and physical stuff. I want to index and map the clutter comprehensively. Topic collection bags? In NLP what do you call this? Anyone done this before? There should be some generic recipe in some Python cookbook out there. | Rather than keep rummaging through the clutter I create while working, I want to look for labels. So let's say I have a detailed inventory of my shit, and whatever is on my file system, online accounts, email, etc.
1. What NLP recipes should I follow to build metadata and generate labels for me? **What should I learn?**
2. How do you visualize all of this? **What should I learn?** | 0.78 | t3_qxjkma | 1,637,338,624 |
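A minimal sketch of one such recipe, assuming a toy in-memory inventory (all paths and descriptions below are invented placeholders): label each item by its most frequent terms, then count label co-occurrences to get an edge list that a graph tool such as networkx or Gephi can visualize.

```python
from collections import Counter
from itertools import combinations

# Hypothetical inventory: path -> short description (stand-ins for real metadata).
inventory = {
    "~/projects/report.docx": "quarterly sales report draft spreadsheet numbers",
    "~/projects/budget.xlsx": "budget spreadsheet sales numbers forecast",
    "~/photos/trip.jpg": "vacation beach photos family trip",
}

def labels_for(text, k=3):
    """Pick the k most frequent tokens in a description as labels."""
    return [w for w, _ in Counter(text.split()).most_common(k)]

# Label each item, then count label co-occurrences across items:
# the resulting edge weights can feed a graph visualization.
doc_labels = {path: labels_for(text) for path, text in inventory.items()}
edges = Counter()
for labels in doc_labels.values():
    for a, b in combinations(sorted(set(labels)), 2):
        edges[(a, b)] += 1
```

Real metadata would come from walking the file system and parsing documents; TF-IDF or a topic model (e.g. LDA) would give better labels than raw frequency.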
LanguageTechnology | SemEval-2022 Task 09: R2VQ - Competence-based Multimodal Question Answering | FIRST CALL FOR PARTICIPATION
We invite you to participate in the SemEval-2022 Task 9: Competence-based Multimodal Question Answering (R2VQ).
The task is being held as part of SemEval-2022, and all participating teams will be able to publish their system description papers in the proceedings published by ACL.
Codalab (Data download): [https://competitions.codalab.org/competitions/34056](https://competitions.codalab.org/competitions/34056)
Motivation
================================================
When we apply our existing knowledge to new situations, we demonstrate a kind
of understanding of how the knowledge (through tasks) is applied. When viewed
over a conceptual domain, this constitutes a competence. Competence-based
evaluations can be seen as a new approach for designing NLP challenges, in
order to better characterize the underlying operational knowledge that a
system has for a conceptual domain, rather than focusing on individual tasks.
In this shared task, we present a challenge that is reflective of linguistic
and cognitive competencies that humans have when speaking and reasoning.
Task Overview
================================================
Given the intuition that textual and visual information mutually inform each
other for semantic reasoning, we formulate the challenge as a competence-
based question answering (QA) task, designed to involve rich semantic
annotation and aligned text-video objects. The task is structured as question
answering pairs, querying how well a system understands the semantics of
recipes.
We adopt the concept of "question families" as outlined in the CLEVR dataset
(Johnson et al., 2017). While some question families naturally transfer over
from the VQA domain (e.g., integer comparison, counting), other concepts such
as ellipsis and object lifespan must be employed to cover the full extent of
competency within procedural texts.
Data Content
================================================
We have built the R2VQ (Recipe Reading and Video Question Answering) dataset, a dataset consisting of a collection of recipes sourced from [https://recipes.fandom.com/wiki/Recipes\_Wiki](https://recipes.fandom.com/wiki/Recipes_Wiki) and [foodista.com](https://foodista.com), and labeled according to three distinct annotation layers: (i) Cooking Role Labeling (CRL), (ii) Semantic Role Labeling (SRL), and (iii) aligned image frames taken from creative commons cooking videos downloaded from YouTube. It consists of 1,000 recipes, with 800 to be used as training, and 100 recipes each for validation and testing. Participating systems will be exposed to the aforementioned multimodal training set, and will be asked to provide answers to unseen queries exploiting (i) visual and textual information jointly, or (ii) textual information only.
Task Website and Codalab Submission site: [https://competitions.codalab.org/competitions/34056](https://competitions.codalab.org/competitions/34056)
Mailing List: [semeval-2022-task9@googlegroups.com](mailto:semeval-2022-task9@googlegroups.com)
Important Dates
================================================
Training data available: October 15, 2021
Validation data available: December 3, 2021
Evaluation data ready: December 3, 2021
Evaluation start: January 10, 2022
Evaluation end: January 31, 2022
System Description Paper submissions due: February 23, 2022
Notification to authors: March 31, 2022
Organization
================================================
James Pustejovsky, Brandeis University, jamesp@brandeis.edu
Jingxuan Tu, Brandeis University, jxtu@brandeis.edu
Marco Maru, Sapienza University of Rome, maru@di.uniroma1.it
Simone Conia, Sapienza University of Rome, conia@di.uniroma1.it
Roberto Navigli, Sapienza University of Rome, navigli@diag.uniroma1.it
Kyeongmin Rim, Brandeis University, [krim@brandeis.edu](mailto:krim@brandeis.edu)
Kelley Lynch, Brandeis University, kmlynch@brandeis.edu
Richard Brutti, Brandeis University, richardbrutti@brandeis.edu
Eben Holderness, Brandeis University, [egh@brandeis.edu](mailto:egh@brandeis.edu) | 1 | t3_qxj7hi | 1,637,337,600 |
LanguageTechnology | AI-Based Generative Writing Models Frequently ‘Copy and Paste’ Source Data | nan | 1 | t3_qxepnh | 1,637,323,136 |
LanguageTechnology | Text Mining Project on eating disorders & social networks: help us! | We are a team of academic researchers interested in psychology and natural language use. We are currently interested in gathering some data from people in Social Media.
We would greatly appreciate it **if you could fill in the attached questionnaire.** **It only takes 2 minutes :)**
It is a standard inventory of questions used by psychologists. Note that the questionnaire contains a field in which the respondent has to provide his/her Reddit username. This would help us to link word use (as extracted from your Reddit's public submissions) with your responses to the questionnaire.
Of course, we will treat the information you provide with the utmost confidentiality and privacy. All information we extract from Reddit will be anonymised, and we will be the only ones capable of connecting your username with your postings and your questionnaire. This information will be kept in an encrypted file and will not be disclosed to anybody.
Link to the questionnaire: [https://forms.gle/PkWyB64aAu6BQTqi6](https://forms.gle/PkWyB64aAu6BQTqi6)
David E. Losada, Univ. Santiago de Compostela, Spain ([david.losada@usc.es](mailto:david.losada@usc.es))
Fabio Crestani, Univ. della Svizzera Italiana, Switzerland ([fabio.crestani@usi.ch](mailto:fabio.crestani@usi.ch))
Javier Parapar, Univ. A Coruña, Spain ([javierparapar@udc.es](mailto:javierparapar@udc.es))
Patricia Martin-Rodilla, Univ. A Coruña, Spain ([patricia.martin.rodilla@udc.es](mailto:patricia.martin.rodilla@udc.es) ) | 0.86 | t3_qxd9tp | 1,637,316,992 |
LanguageTechnology | Any RASA users out there? Setting variables based on response text chosen | I am creating a chatbot, but the use case is a little bizarre. More specifically, the chatbot will be playing the role of someone asking questions, and I will be acting as the customer service representative. So for example, I will type “How can I help you?”, and the chatbot will respond with something like "What are the meal options on flight ABC?" I already have a massive list of potential questions the chatbot could ask.
So as you might expect, there will be many text options for when I type “How can I help you?”, from “Can I get a flight from A to B on day X?” to “What airports are available in state Y?” From what I understand, if there are many text options under something like utter\_help\_me under responses in the domain, a text will be chosen randomly. But I want variables to be set based on which text is randomly chosen. Is there a way to do that? I know this is usually done based on what is typed rather than what is returned in a response, so this is the strange part. | 0.67 | t3_qx7h9s | 1,637,293,696 |
LanguageTechnology | Deploying Serverless spaCy Transformer Model with AWS Lambda | In this article, we show you how to push an NER spacy transformer model to Huggingface and deploy the model on [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) to run predictions. **Deploying models without the need to manage backend servers** will enable developers and small startups who do not have devops resources to start deploying models ready to use in production.
Full Article => [https://towardsdatascience.com/deploying-serverless-spacy-transformer-model-with-aws-lambda-364b51c42999](https://towardsdatascience.com/deploying-serverless-spacy-transformer-model-with-aws-lambda-364b51c42999) | 1 | t3_qwxrhc | 1,637,264,384 |
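A hedged sketch of the Lambda side of such a deployment: the `load_model` stub below is a placeholder for the real spaCy/Hugging Face model load the article describes; only the handler shape (event/context in, status code and JSON body out) is standard.

```python
import json

# Minimal sketch of an AWS Lambda handler for NER predictions.
def load_model():
    # Placeholder: in the real deployment this would load the spaCy
    # transformer model from the container image or a Lambda layer.
    def predict(text):
        return [{"text": "Berlin", "label": "GPE"}] if "Berlin" in text else []
    return predict

model = load_model()  # loaded once, outside the handler, so warm invocations reuse it

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    entities = model(body.get("text", ""))
    return {"statusCode": 200, "body": json.dumps({"entities": entities})}
```

Loading the model at module scope (not inside `handler`) is the key design choice: cold starts pay the load cost once, and subsequent invocations reuse the in-memory model.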
LanguageTechnology | Collaboration Request for SemEval-2022 | Hello,
I’ve been working to compete in Task 4: ‘Patronizing and Condescending Language Detection’ (SemEval 2022). I implemented a few baselines and did some experiments, and I have several ideas for further improvement. I’m looking for a team member to speed up the work process. Anyone interested can ping me.
Looking forward to your responses! | 1 | t3_qw42wb | 1,637,169,408 |
LanguageTechnology | spaCy's config and project systems | nan | 1 | t3_qw311q | 1,637,166,592 |
LanguageTechnology | Recommendations for pre-trained word embedding models. | Does anyone have any (preferably free) pre-trained model recommendations, besides Google’s news vectors model (too specific/scientific at times) and the Common Crawl model (too many typos), for generating similarity scores?
I'm using Gensim with KeyedVectors, usually with .bin or .vec files as input, I believe. | 0.86 | t3_qvwzuw | 1,637,147,392 |
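For similarity scores, Gensim's `KeyedVectors.load_word2vec_format(path, binary=True)` followed by `.similarity(w1, w2)` is the usual route; the underlying computation is just cosine similarity, sketched here on toy vectors (the 3-d vectors are invented stand-ins for real embeddings).

```python
from math import sqrt

# Toy 3-d vectors standing in for real word embeddings.
vectors = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity(w1, w2):
    return cosine(vectors[w1], vectors[w2])
```

With Gensim, `kv = KeyedVectors.load_word2vec_format("model.bin", binary=True)` and `kv.similarity("cat", "dog")` do exactly this, over 300-d vectors.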
LanguageTechnology | Hardware used for dev work at enterprise company | I'm curious to know what people are using for dev hardware at large companies - with particular interest in hearing from those who work with large language models. is anyone surviving without a GPU? OS info helpful as well. | 0.5 | t3_qvn6tf | 1,637,111,168 |
LanguageTechnology | Work regarding probing language models for semantics? | Hi. I'm taking a look at some literature regarding language model probing and have noticed that there's a lot of work that's been done focusing on whether language models have _syntactic_ properties, but I haven't really seen a lot of work probing the semantic capabilities of these models and what information is being used during inference.
I think the closest I've found is a paper titled [_Sorting Through the Noise: Testing Robustness of Information Processing in Pre-trained Language Models (Pandia and Ettinger, 2021)_](https://arxiv.org/abs/2109.12393) but I'm not entirely sure if this paper dives into analyzing what kind of semantic information is being used by LMs and rather focuses on fail cases of LMs.
Just wondering if anyone here can give me some pointers or recommendations. Thanks! | 1 | t3_qvmfrv | 1,637,108,864 |
LanguageTechnology | Using Python to extract highest occurring and most unique keywords in text? | Suppose I have a column where each cell has a description of a product. What packages/ algorithms can I use to tag each description with the keywords that are both highest occurring while also being relatively unique to the text? For example, if half the products are toys, words like "child", "toy", "learning", etc, which are not typical stop words, become less important to analysis. | 1 | t3_qvctdw | 1,637,083,008 |
LanguageTechnology | Do you think NLP will be able to comprehend linguistic typology? | The idea behind linguistic typology is that there are patterns common to all languages. These patterns repeat themselves at different levels. They are also specific to individual languages.
Linguistic classification organizes languages based on structural features, patterns, and linguistic units. It offers a systematic way of grouping languages to discover linguistic properties shared by these languages.
Since linguistic classification involves collecting and analyzing data from various sources (fieldwork, literature, language documentation, linguistic atlas, etc.), could something like GPT-3 be able to comprehend it?
I’m referring mainly to the translation and localization field.
Algorithms are currently unable to grasp the context and nuances of a text. This means that we still need human translation to interpret cultural references and preserve the style and intention of the original text.
How long do you think it will take for AI to surpass a human translator?
My question is based on [this article](https://www.oneskyapp.com/blog/how-linguistic-typology-helps-us-understand-languages/?utm_source=reddit&utm_medium=language-technology) that goes over linguistic typology and why it makes human translators indispensable in localization processes. | 0.86 | t3_qv8rw9 | 1,637,072,256 |
LanguageTechnology | Webinar with creator of sentence transformers later today | I figured a few of you will be interested in this, we have a webinar later today (11AM ET) where the creator of SBERT and the sentence transformers library - Nils Reimers - will be talking about semantic search and fine-tuning sentence transformers. It'll cover how to build sentence vectors with SBERT, I assume a little on Hugging Face, and how to index and perform a semantic search using Pinecone.
[Registration link is here!](https://pinecone-io.zoom.us/webinar/register/1416360828695/WN_FNyqH2EsTnesF3Rh9-QSHA?utm_source=sendgrid.com&utm_medium=email&utm_campaign=email)
Hope it's useful, thanks :) | 1 | t3_qv5p8e | 1,637,061,760 |
LanguageTechnology | Advice on my bachelor's thesis topic | Hello everyone.
This is my last year of uni, I'm taking computer science major. Now I'm thinking about my thesis topic.
My hobby is also learning foreign languages, so I have to use machine translators like Google Translate, Papago, etc. As you might know, those are kind of... not really good, especially with certain languages.
So I thought, "Hm, what if I build my own translator?"
But is it, you know, possible to build a translator that would work better than those that so many people work on for years?
I'm interested in a Russian-Korean or Russian-Japanese (and back) translator, because those are the languages I'm learning, and when I translate text in those languages with Google Translate it makes no sense most of the time.
I also think that it might be a bit too much for just an undergraduate thesis? If you maybe have some ideas related to this problem that are not so complex, I would be glad to read them. | 0.67 | t3_qv3m1r | 1,637,053,184 |
LanguageTechnology | Talk with AI (NLP) Model | nan | 1 | t3_qv0f2i | 1,637,040,256 |
LanguageTechnology | GPT-J through API + training on custom datasets | Has anybody checked out EleutherAI’s new GPT-J yet?
I feel like it’s on par with OpenAI’s Curie. It’s pretty good overall for inference, but I thought it would be cool to fine-tune it on a custom dataset.
I personally found it hard to do because of the lack of resources out there, so I ended up putting together this project to simplify custom training of GPT-J and deployment to production after the training. Both can be done through a web interface I built. Also, I added a default pre-trained GPT-J to use through an interface or API too.
Please, check it out and give me feedback if you can! [https://www.tensorbox.ai/](https://www.tensorbox.ai/) | 1 | t3_quskm5 | 1,637,016,192 |
LanguageTechnology | Daily digest of new NLP Research Papers | Hi everyone, is there any website or subscription where we can get a daily digest of top new research papers submitted in NLP or any of its subfields? | 0.97 | t3_quox74 | 1,637,006,208 |
LanguageTechnology | Quality dimensions for alignment datasets | Hi guys, I'm a data science student working on the NLG data-to-text task. I'm trying to figure out how to evaluate my alignment dataset with adequate quality dimensions. I'm a newbie in this field; for now I'm reading these excellent papers:
\- A Survey of Evaluation Metrics Used for NLG Systems: [https://arxiv.org/abs/2008.12009](https://arxiv.org/abs/2008.12009)
\- Survey of the State of the Art in Natural Language Generation: Core tasks, applications: [https://arxiv.org/abs/1703.09902](https://arxiv.org/abs/1703.09902)
As evaluation metrics, I was planning to use the following:
BLEU, NIST, GTM, METEOR, ROUGE, Vector Extrema, MoverScore, BLEURT, PARENT.
However, it's still not clear to me what these quality dimensions mean or which ones I should use. Could someone direct me to a specific paper? Thank you very much, and sorry for my inexperience. | 1 | t3_quhwck | 1,636,987,136 |
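Of the metrics listed, BLEU is the usual starting point. Its core ingredient, modified n-gram precision, is simple to sketch for the unigram case (real implementations such as `sacrebleu` or `nltk.translate.bleu_score` add higher-order n-grams and a brevity penalty).

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Modified unigram precision: candidate word counts are clipped by
    their count in the reference, so repeating a matched word cannot
    inflate the score."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    clipped = sum(min(c, ref[w]) for w, c in cand.items())
    return clipped / max(1, sum(cand.values()))
```

The clipping is what separates BLEU from naive precision: the degenerate candidate "the the the" against reference "the cat" scores 1/3, not 3/3.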
LanguageTechnology | What language pairs are used the most in the evaluation of machine translation models, and why? | In my experience I see English-German and English-Romanian very frequently. Not sure why that is the case. | 1 | t3_quaw5s | 1,636,961,280 |
LanguageTechnology | Normalizing Named Entities | [Machine-Guided Polymer Knowledge Extraction Using Natural Language Processing: The Example of Named Entity Normalization](https://pubs.acs.org/doi/abs/10.1021/acs.jcim.1c00554)
This paper discusses supervised clustering methods for Named Entity Normalization (NEN), a sometimes overlooked but very important area of information extraction. We cluster the name variations by which chemical entities are referred to in the literature. We establish the advantage of fastText embeddings over Word2Vec embeddings and show that parameterized cosine distance, as well as ensembling of models, leads to performance gains for NEN. This is also one of the few works to study normalization of named entities for a niche domain, i.e., polymers. This dataset is one of the biggest out there for normalization and presents unique challenges not present in general English text, as the cluster sizes are much larger and the cluster size variance is greater than for typical synonym clusters.
The code and data for this paper are available [here](https://github.com/Ramprasad-Group/polymerNEN). Consider using this dataset for benchmarking and evaluation purposes if you are working in this area. | 0.9 | t3_qu4zpo | 1,636,940,544 |
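A hedged sketch of the general idea, not the paper's method: cluster name variants by thresholded cosine distance, here over character-trigram count vectors standing in for the paper's fastText embeddings (the polymer names and the 0.5 threshold are illustrative assumptions).

```python
from collections import Counter
from math import sqrt

def trigram_vector(name):
    """Character-trigram counts, a crude stand-in for learned embeddings."""
    s = name.lower()
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine_distance(u, v):
    dot = sum(u[k] * v[k] for k in u)
    return 1 - dot / (sqrt(sum(x * x for x in u.values()))
                      * sqrt(sum(x * x for x in v.values())))

def cluster(names, threshold=0.5):
    """Greedy clustering: attach each name to the first cluster whose
    head vector is within the distance threshold, else start a new one."""
    clusters = []
    for name in names:
        v = trigram_vector(name)
        for head_vec, members in clusters:
            if cosine_distance(v, head_vec) <= threshold:
                members.append(name)
                break
        else:
            clusters.append((v, [name]))
    return [members for _, members in clusters]
```

The "parameterized" part of the paper's distance would replace the plain dot product with a learned weighting; this sketch only shows the clustering skeleton around it.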
LanguageTechnology | Question about statistics and algebra for NLP | I'm a journalist and freelance translator and I worked in the banking system in my country for many years. A couple of years ago I decided I wanted to get into data science, took a few practical courses and got a job for a consulting company, building simple models for businesses. Nothing too technical.
For about six weeks now I've been getting into NLP to tie my past experiences with my present ones. But I want to dive deeper into the inner workings of NLP to professionalize my profile a little bit.
What topics do you think I should focus on? I'm particularly interested in learning the basics of statistics and algebra as they apply to NLP, but I don't know where to start. Thanks in advance! | 0.81 | t3_qtwga4 | 1,636,915,712 |
LanguageTechnology | CMU Researchers Develop A Unified Framework For Evaluating Natural Language Generation (NLG) | Natural language generation (NLG) is a broad term that encompasses a variety of tasks that generate fluent text from input data and other contextual information. In actuality, the goals of these jobs are frequently very different. Some well-known instances of NLG include compressing a source article into a brief paragraph conveying the most significant information, converting content presented in one language into another, and creating unique responses to drive the discourse.
Natural language processing has advanced at a breakneck pace in terms of enhancing and developing new models for various jobs. However, assessing NLG remains difficult: human judgment is considered the gold standard, but it is typically costly and time-consuming to get. Automatic evaluation, on the other hand, is scalable, but it’s also time-consuming and challenging. This problem originates because each work has varied quality requirements, making it difficult to establish what to assess and how to measure it.
Researchers from Carnegie Mellon University, Petuum Inc., MBZUAI and UC San Diego recently took a step in this direction by developing [a single framework for NLG evaluation](https://arxiv.org/pdf/2109.06379.pdf) that makes it easier to create metrics for various language generation tasks and characteristics.
# [Quick Read](https://www.marktechpost.com/2021/11/13/researchers-develop-a-unified-framework-for-evaluating-natural-language-generation-nlg/) | [Paper](https://arxiv.org/pdf/2109.06379.pdf)| [Code](https://github.com/tanyuqian/ctc-gen-eval) | [CMU Blog](https://blog.ml.cmu.edu/2021/10/29/compression-transduction-and-creation-a-unified-framework-for-evaluating-natural-language-generation/) | 1 | t3_qtic2e | 1,636,865,408 |
LanguageTechnology | Meta AI Open-Sourced It’s First-Ever Multilingual Model (Won The WMT Competition): A Step Towards Future Of Machine Translation | Machine translation (MT) is the process of employing artificial intelligence to automatically translate text from one language (the source) to another (the destination) (AI). The ultimate goal is to create a universal translation system that will allow everyone to access information and communicate more effectively. It is a long road ahead for this vision to turn into reality.
Most currently used MT systems are bilingual models, which require labeled examples for each language pair and job. Such models are, however, unsuitable for languages with insufficient training data. Its enormous complexity makes it impossible to scale to practical applications such as Facebook, where billions of users post in hundreds of languages every day.
To address this problem and develop [a universal translator,](https://ai.facebook.com/blog/the-first-ever-multilingual-model-to-win-wmt-beating-out-bilingual-models/) the MT field must witness a transition from bilingual to multilingual models. A single translation model is used to process numerous languages in multilingual machine translation. The research would attain its peak if it were possible to build a single model for translation across as many languages as possible by effectively using the available linguistic resources.
# [Quick Read](https://www.marktechpost.com/2021/11/13/meta-ai-open-sourced-its-first-ever-multilingual-model-won-the-wmt-competition-a-step-towards-future-of-machine-translation/) | [Paper](https://arxiv.org/pdf/2108.03265.pdf) | [Github](https://github.com/pytorch/fairseq/tree/main/examples/wmt21?) | [Meta Blog](https://ai.facebook.com/blog/the-first-ever-multilingual-model-to-win-wmt-beating-out-bilingual-models/) | 0.91 | t3_qt5436 | 1,636,822,144 |
LanguageTechnology | Text Classification Master Thesis in NLP | Hello! I've made some research online about some ideas for a text classification project but most likely i have some doubts about them.
So, my teacher offered me an idea: a project that is *only text classification, with a comparison between some methods*.
So I was thinking: between this and a popular topic like "Fake News Detection" or "Sentiment Analysis", which is the better idea?
I'm asking mostly because, as far as this master's goes, I want an easier thesis project due to some personal problems.
I'm also opened for more ideas if you have some. | 1 | t3_qszw9g | 1,636,804,992 |
LanguageTechnology | NLP switch advice for bio | Hi. Could anyone working in NLP shoot me some advice?
I'm trying to switch to NLP-based work. I'm a biologist/bioinformatician (M.S.) and I've done ML with computer vision in industry. I even turned down a pretty nice computer vision job offer because it had restrictions and was not NLP focused.
My goal is to get a jump start on NLP for bioinformatics with protein and gene language models. To that end, I've been studying PyTorch and NLP from scratch. I expect to have a working understanding of transformer/BERT-based language models and a decent example or two before I start applying for biology-based NLP roles.
However, I'm afraid I'm a bit too early for gainful employment strictly working with proteins and genes, given the sporadic appearance of job postings.
To summarize, I've got a bio background, I've done ML in industry with computer vision and I am prioritizing a research career using NLP and biology. Today, I *think* I would like a job where I can work with NLP in some context, with enough of a salary I can live comfortably in Spain as a remote worker (am American). I'd like to do this until more opportunity appears.
A few questions;
1) Are there places other than LinkedIn you seek NLP jobs?
2) What skills can get me a remote NLP job?
- I've learned the basics of Flask, I would continue figuring this out to serve models on the cloud and make them accessible via REST APIs if it would greatly increase my chances at a paid remote gig.
- I could do something with huggingface, but I don't know what general project would be good to get non-bio jobs
- Coding models with JAX?
3) Would you recommend a different path on the short-term?
- focus on finding a computer vision job (And use transformer models to gain more transferable knowledge to future career)
- focus on learning the minimal backend/webdev to get a job and a paycheck in the short term?
Any pertinent advice would be appreciated. This is my dead-set goal, but I don't really want to get side-tracked or accept an offer that would require me to establish myself somewhere physically, or do a PhD.
Thanks | 0.87 | t3_qszvx8 | 1,636,804,864 |
LanguageTechnology | Is anyone studying CS224n from Stanford? | Hello guys, if there's anyone else who's studying this course and wants to collaborate on homeworks, or just in general, toss me a message! | 0.81 | t3_qsymqv | 1,636,799,744 |
LanguageTechnology | University of Waterloo AI Researchers Introduce A New NLP Model ‘AfriBERTa’ For African Languages Using Deep Learning Techniques | A technology that has been around for years but most often taken for granted is Natural Language Processing (NLP). It is the employment of computational methods to analyze and synthesize natural language and speech. Pre-trained multilingual language models have proven efficient in performing various downstream NLP tasks like sequence labeling and document classification.
The notion behind designing pre-trained models is to build a black box that comprehends the language that can then be instructed to perform any task in that language. The goal is to construct a machine that can replace a ‘well-read’ human. However, these models require large chunks of training data to build them. As a result, the world’s under-resourced languages are left out from being explored.
Researchers from the David R. Cheriton School of Computer Science at the University of Waterloo dispute this assumption and introduce [AfriBERTa](https://aclanthology.org/2021.mrl-1.11.pdf). This new neural network model leverages deep-learning approaches to generate state-of-the-art outcomes for under-resourced languages. The researchers show that they can build competitive multilingual language models with less than 1 GB of text. Their AfriBERTa model covers 11 African languages, four of which have never had a language model before.
# [Quick Read](https://www.marktechpost.com/2021/11/12/university-of-waterloo-ai-researchers-introduce-a-new-nlp-model-afriberta-for-african-languages-using-deep-learning-techniques/)| [Paper](https://aclanthology.org/2021.mrl-1.11.pdf) | [Code](https://github.com/keleog/afriberta) | 0.9 | t3_qsqal7 | 1,636,767,616 |
LanguageTechnology | Could you give examples of types of NLP projects you worked on at work in real business scenarios? | I get the impression that Kaggle competitions aren't reflective of real-world applications of data science in NLP, and common NLP examples like chatbots, search engines, and grammar checking are not necessarily the majority of real-world projects either? Am I wrong? Or are real-world business applications of NLP really quite different and unique compared to the examples I just mentioned?
Could some of you in the field give me examples of what real-world business projects look like? I want to get a feel of what working in NLP as a data scientist would be like.
Side question, is there normally not enough work to go around to just focus on NLP alone as a career, and do you have to do computer vision or other subfields of data science in a typical work setting? | 0.87 | t3_qsofis | 1,636,761,728 |
LanguageTechnology | Spacy vs NLTK for Spanish Language Statistical Tasks | Hey all,
I have some experience using both NLTK and spaCy for different NLP tasks. I find myself wanting to gravitate towards spaCy because of its community and documentation, but I can't help feeling that for my specific use case NLTK may be the better route.
My idea is to scrape the entirety of a Spanish news site and analyze the content of all their news articles. I want to answer questions such as:
1. What are the top 100 most frequent words used among all their articles.
2. What is the lexical diversity across the entire site (and perhaps per article so that I can try to predict which articles are easier to read for non native learners)
3. What are the most common n-grams across the entire site to help learners know what vocabulary to study.
Between NLTK and spaCy, which framework is better for completing tasks such as the above? My guess is both can do it, but I wonder if one is better suited for it than the other.
Thanks! | 1 | t3_qsld7i | 1,636,752,640 |
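All three statistics can be computed with either library once you have tokens; a library-free sketch on two invented Spanish snippets shows the shape of the computation (NLTK's `FreqDist` and `ngrams`, or spaCy's `Doc` tokens, would feed the same counters).

```python
from collections import Counter

# Two toy "articles" standing in for scraped news text.
articles = [
    "el gobierno anuncia nuevas medidas",
    "el presidente anuncia el plan economico",
]
tokens = [t for a in articles for t in a.split()]

top_words = Counter(tokens).most_common(3)          # 1. most frequent words
lexical_diversity = len(set(tokens)) / len(tokens)  # 2. type/token ratio

def ngrams(seq, n):
    """All consecutive n-grams of a token sequence."""
    return list(zip(*(seq[i:] for i in range(n))))

# 3. most common bigrams, computed per article so they don't cross boundaries.
top_bigrams = Counter(bg for a in articles
                      for bg in ngrams(a.split(), 2)).most_common(2)
```

For real Spanish news text, spaCy's `es_core_news_sm` tokenizer (handling punctuation, clitics, casing) would replace `str.split()`, but the counting stays identical.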
LanguageTechnology | Create semantic search applications with machine-learning workflows | Create semantic search applications with machine-learning workflows. The demo (see the live demo link below) shows how various NLP pipelines can be connected together to build a semantic search application.
txtai executes machine-learning workflows to transform data and build AI-powered semantic search applications. txtai has support for processing both unstructured and structured data. Structured or tabular data is grouped into rows and columns. This can be a spreadsheet, an API call that returns JSON or XML or even list of key-value pairs.
Some example workflows:
* Summarize news articles
* Summarize and translate research papers
* Load and index data via a CSV
* Schedule a recurring job to query an API and index results for semantic search
References:
[Live Demo](https://huggingface.co/spaces/NeuML/txtai)
[GitHub](https://github.com/neuml/txtai)
[Article](https://towardsdatascience.com/run-machine-learning-workflows-to-transform-data-and-build-ai-powered-text-indices-with-txtai-43d769b566a7)
[Notebook](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/14_Run_pipeline_workflows.ipynb)
[Notebook](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/22_Transform_tabular_data_with_composable_workflows.ipynb) | 0.89 | t3_qs9ajk | 1,636,716,928 |
LanguageTechnology | Speech recognition hackathon (ends Nov. 17) | nan | 1 | t3_qrya6s | 1,636,674,176 |
LanguageTechnology | Macaw - Question Answering NLP Model - Applied NLP Tutorial with Python Code | nan | 0.75 | t3_qrtokz | 1,636,660,992 |
LanguageTechnology | Explicit content detector python | Hello !
I want to build a project that's aimed at detecting explicit content in texts; it's just going to flag whether the text has explicit content. I already made something that detects explicit words in a text, but I want to do something more complex. As you know, you don't have to use bad words to make explicit sentences. I thought about creating a list of possible explicit sentences, but that would be infinite.
What are my chances here? Do I have any other options?
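For the word-list baseline, matching on word boundaries avoids false hits inside other words (the "Scunthorpe problem"); going beyond wording usually means training a text classifier on labeled explicit/non-explicit examples rather than enumerating sentences. A minimal sketch of the boundary-aware flagger, with an invented placeholder list:

```python
import re

BLOCKLIST = {"badword", "slur"}  # hypothetical placeholder terms

def flag_explicit(text):
    # \b ensures 'class' is not flagged for containing 'ass', etc.
    tokens = re.findall(r"\b\w+\b", text.lower())
    return any(t in BLOCKLIST for t in tokens)

print(flag_explicit("that badword again"))  # True
print(flag_explicit("classic assignment"))  # False
```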
Thanks in advance. | 0.75 | t3_qr6qht | 1,636,585,344 |
LanguageTechnology | Experience with Context-Based Sentiment Analysis? | Sentiment analysis is a pretty standard problem. It's generally done with short input texts and is not seen as a super difficult problem. However I haven't thought about is adding context to the task—say like trying to predict the sentiment of one comment given the above comments on a Facebook post. I imagine that adding context could be as simple as concatenating the entire context (e.g. other comments and the original post) to the input given the capabilities of Transformer models. But I've never actually tried to solve a problem like this. Does anyone have experience or insights to share for a problem like this? | 1 | t3_qr5zpw | 1,636,583,296 |
LanguageTechnology | Cedille, the largest French language model, released in open source | nan | 1 | t3_qr4udb | 1,636,580,224 |
LanguageTechnology | An Introduction to Language Models in NLP (Part 1: Intuition) | nan | 0.94 | t3_qr4juj | 1,636,579,328 |
LanguageTechnology | Same Document, two OCRs, super classifier? | I have 150k documents, 10 pages each, and two outputs: the Original OCR and my Tesseract OCR.
I've built classifiers with the Tesseract output, but am seeking ways to strengthen my model. Model in question: [https://scikit-learn.org/stable/modules/naive\_bayes.html](https://scikit-learn.org/stable/modules/naive_bayes.html) along with \`CountVectorizer(ngram\_range=(2, 2))\`
It dawned on me that it might be possible to somehow sandwich the Original OCR with the Tesseract OCR. Would such a thing be possible? Or even useful?
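One simple way to "sandwich" the two outputs is to keep both token streams but tag each token with its source, so the vectorizer keeps them distinct and agreement or disagreement between the two OCRs shows up as correlated features. A stdlib sketch of just the feature construction (the vectorizer/classifier step stays as above); the `orig:`/`tess:` prefixes are an invented convention:

```python
def combined_features(original_text, tesseract_text):
    # Prefix tokens by source so 'orig:bank' and 'tess:bank' stay distinct
    feats = [f"orig:{t}" for t in original_text.split()]
    feats += [f"tess:{t}" for t in tesseract_text.split()]
    return " ".join(feats)

print(combined_features("hello world", "hello w0rld"))
# orig:hello orig:world tess:hello tess:w0rld
```

The simpler alternative — plain concatenation of the two texts — is also worth benchmarking before adding the prefixes.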
I plan to try it out, but what do you say internet?
(open to all thoughts and considerations) | 1 | t3_qr4c0c | 1,636,578,688 |
LanguageTechnology | I need an NLP model which can be trained with tabular data like a biblical corpus and be able to make direct predictions. | Tapas is the closest I've come across so far, but it only accepts the data as input and you can't train with the data. Predictions are slow and it can't handle the size of the data I have | 1 | t3_qr2ln2 | 1,636,573,952
LanguageTechnology | Word Sense Disambiguation. Recommendations | Hello everyone!
I'm taking a class on NLP, and I have to give a presentation on Word Sense Disambiguation, so I'm asking for any valuable resources that anyone could recommend, both theory and algorithms stuff, so that I can do my research.
Thanks in advance! | 0.5 | t3_qr13xy | 1,636,569,728 |
LanguageTechnology | Advice - University of Paris, Master | Hello everyone,
I was wondering if anyone is familiar with the Linguistics Master's program at the University of Paris?
After all, the university has only existed since 2019, having been formed from two universities that emerged from the Sorbonne, so it's probably hard to find someone who has had a lot of experience there.
But the program looks very interesting and the master program has new different specializations. In addition, the *Laboratoire De Linguistique Formelle* seems to be part of the university, which also looks very big!
Is there anyone here who has already had experience with the program? | 1 | t3_qqy0ep | 1,636,561,280 |
LanguageTechnology | Resources and Books About NLP | I'm looking to read more books and resources about NLP. Kindly share book titles and resources with me.
Thank you. | 0.5 | t3_qqurh9 | 1,636,551,936 |
LanguageTechnology | Autoregressive meaning | What does the word “autoregressive” mean when describing NLP models? | 1 | t3_qqr55f | 1,636,538,880 |
LanguageTechnology | SEP token | Question 1: In LMs that pretrain without NSP like objectives, the SEP tokens appear only once in the sentence at the end. So effectively it will only be trained to mark the end of the sentence, rather than something that separates sentences. If our task involves sentence pairs, we have to rely on fine-tuning for the model to learn that it is actually a separator token. Is my thinking here correct?
Question 2: Can we use multiple SEP tokens to separate more than two sentences in models like BERT? | 1 | t3_qqqu2m | 1,636,537,728 |
LanguageTechnology | How AI is transforming MarTech (featuring NLP) | nan | 1 | t3_qqqjje | 1,636,536,448 |
LanguageTechnology | MIT AI Researchers Introduce ‘PARP’: A Method To Improve The Efficiency And Performance Of A Neural Network | Recent developments in machine learning have enabled automated speech-recognition technologies, such as Siri, to learn the world’s uncommon languages, which lack the enormous volume of transcribed speech required to train algorithms. However, these methods are frequently too complicated and costly to be broadly used.
Researchers from MIT, National Taiwan University, and the University of California, Santa Barbara, have developed a simple technique that minimizes the complexity of a sophisticated speech-learning model, allowing it to run more efficiently and achieve higher performance.
Their method entails deleting unneeded components from a standard but complex speech recognition model and then making slight tweaks to recognize a given language. Teaching this model an unusual language is a low-cost and time-efficient process because only minor adjustments are required once the larger model is trimmed down to size.
# [Read The](https://arxiv.org/pdf/2106.05933.pdf) [Paper](https://arxiv.org/pdf/2106.05933.pdf) | [Checkout The](https://people.csail.mit.edu/clai24/parp/) [Project](https://people.csail.mit.edu/clai24/parp/) | [5 Min Read](https://www.marktechpost.com/2021/11/09/mit-ai-researchers-introduce-parp-a-method-to-improve-the-efficiency-and-performance-of-a-neural-network/) | [MIT Blog](https://news.mit.edu/2021/speech-recognition-uncommon-languages-1104) | 0.92 | t3_qqmhp1 | 1,636,519,936 |
LanguageTechnology | Intel Optimizes Facebook DLRM with 8x speedup (Deep Learning Recommendation Model) | nan | 1 | t3_qqgjei | 1,636,501,248 |
LanguageTechnology | Please help | Hi, I am a complete beginner dumbo to NLP and want to try learning topic modeling. Is it okay to use LDA on just 16 documents? They are business reports and I would like to extract topics to assess the trends.
Omg please help !! | 0.5 | t3_qq5yxi | 1,636,471,296 |
LanguageTechnology | Language model built on LSTM? | Hey everyone! Could I get an example (or multiple, if possible) of a language model which has been built on LSTM rather than Transformers?
Thank you
Edit: preferably one that can be used with the Hugging Face API | 0.67 | t3_qq55ds | 1,636,468,992 |
LanguageTechnology | Improving Chatbot technology with NLP | Hi
I am currently studying Computer Science and am interested in writing my thesis in the field of NLP about improving chatbots. This is of course too broad of a topic itself. Where do you think the biggest obstacle in developing better chatbots currently lies?
Some topics I have been suggested so far are:
1. dialogue success/fluency
2. query intent classification
Do you agree with any of these or is there another bigger obstacle?
PS I know better is sort of vague, but what I mean is better in general human terms, so that an average user's experience would be better. Current chatbots are often algorithmic, cannot answer many questions, and are more like category selection/specification tools and less like the AI customer service they are branded as. | 1 | t3_qq4tsg | 1,636,468,096
LanguageTechnology | NLPAug: what proportion of augmented sentences do you usually add to the dataset? | Hi,
We are working on an NLP problem that is near to hate speech detection.
We use a BERT neural network that has 2 outputs:
\- Rating the sentence
\- Classifying it among 16 types of speech classes
We have 12K sentences tagged in a dataset.
Since the dataset is relatively tiny, we are working on augmenting it with [NLPAug](https://github.com/makcedward/nlpaug). We use 2 strategies. Synonymisation and back translation.
I was wondering if you have experience with that, what is the usual ratio of augmented sentences in your dataset? 1/3, 1/2, 3/4...
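There doesn't seem to be one universal ratio; common practice is to treat it as a hyperparameter and compare validation scores at a few settings (e.g. 0.25, 0.5, 1.0 augmented sentences per original). A sketch of mixing at a chosen ratio, with a placeholder function standing in for the NLPAug synonym/back-translation augmenters:

```python
import random

def augment_dataset(sentences, augment_fn, ratio, seed=0):
    # ratio = number of augmented sentences added per original sentence
    rng = random.Random(seed)
    k = int(len(sentences) * ratio)
    extra = [augment_fn(s) for s in rng.sample(sentences, k)]
    return sentences + extra

fake_aug = lambda s: s + " (augmented)"  # stand-in for NLPAug augmenters
data = [f"sentence {i}" for i in range(12)]
print(len(augment_dataset(data, fake_aug, ratio=0.5)))  # 18
```

One caution worth keeping in mind: augmented variants of a sentence should stay on the same side of the train/validation split as their original, or the validation score becomes inflated.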
Thanks | 0.94 | t3_qq1ok9 | 1,636,457,344 |
LanguageTechnology | Hugging Face Introduces ‘Datasets’: A Lightweight Community Library For Natural Language Processing (NLP) | ***Datasets*** is a modern NLP community library created to assist the NLP environment. [***Datasets***](https://arxiv.org/pdf/2109.02846.pdf) aims to standardize end-user interfaces, versioning, and documentation while providing a lightweight front-end that works for tiny datasets as well as large corpora on the internet. The library’s design involves a distributed, community-driven approach to dataset addition and usage documentation. After a year of work, the library now features over 650 unique datasets, has over 250 contributors, and has supported many original cross-dataset research initiatives and shared tasks.
[Quick Read](https://www.marktechpost.com/2021/11/08/hugging-face-introduces-datasets-a-lightweight-community-library-for-natural-language-processing-nlp/) | [Paper](https://arxiv.org/pdf/2109.02846.pdf) | [Github](https://github.com/huggingface/datasets) | 0.67 | t3_qpt6cl | 1,636,423,424 |
LanguageTechnology | SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition (Call for Submission) | We invite you to participate in SemEval-2022 Task 11: **Multi**lingual **Co**mplex **N**amed **E**ntity **R**ecognition (MultiCoNER).
**Task Website:** [https://multiconer.github.io/](https://multiconer.github.io/)
**Codalab (Data download + Submission):** [https://competitions.codalab.org/competitions/36044](https://competitions.codalab.org/competitions/36044)
This task focuses on the detection of complex entities, such as movie, book, music and product titles, in low context settings (short and uncased text).
The task covers 3 domains (sentences, search queries, and questions) and provides data in 11 languages: **English, Spanish, Dutch, Russian, Turkish, Korean, Farsi, German, Chinese, Hindi**, and **Bangla**. Here are some examples in English, Chinese, Bangla, Hindi, Russian, Korean, and Farsi, where entities are enclosed inside brackets with their type:
* the original **\[ferrari daytona |** **PRODUCT\]** replica driven by **\[don johnson |** **PERSON\]** in **\[miami vice |** **CreativeWork\]**
* 它 的 座 位 在 \[**圣 布 里 厄** | **LOCATION\]** .
* স্টেশনটির মালিক \[**টাউনস্কেয়ার মিডিয়া** | **CORPORATION\]** ।
* यह \[**कनेल विभाग** | **LOCATION**\] की राजधानी है।
* в основе фильма — стихотворение \[**г. сапгира** | **PERSON\]** .
* \[**블루레이 디스크** | **PRODUCT\]** : 광 기록 방식 저장매체의 하나
* \[**نینتندو** | **CORPORATION\]** / \[**باندای نامکو انترتینمنت** | **CORPORATION\]** – \[**برادران سوپر ماریو نهایی** | **CreativeWork\]**
Additionally, a **multilingual NER track** is also offered for multilingual systems that can process all languages. A **code-mixed track** allows participants to build systems that process inputs with tokens coming from two languages. For example, the following are some code-mixed examples from Turkish, Spanish, Dutch, German, and English.
* it was produced at the \[**soyuzmultfilm** | **GROUP\]** studio in \[**moskova** | **LOCATION\]** .
* \[**arturo vidal** | **PERSON\]** ( born 1987 ) , professional footballer playing for \[**fútbol club barcelona** | **GROUP\]**
* daarmee promoveerde hij toen naar de \[**premier league** | **CORPORATION\]** .
* piracy has been a part of the \[**sultanat von sulu** | **LOCATION\]** culture .
The task focuses on detecting semantically ambiguous and complex entities in short and low-context settings. Participants are welcome to build NER systems for any number of languages. And we encourage to aim for a bigger challenge of building NER systems for multiple languages. The task also aims at testing the domain adaption capability of the systems by testing on additional test sets on questions and short search queries.
We have released training data for 11 languages along with a baseline system to start with. Participants can submit their system for one language but are encouraged to aim for a bigger challenge and build multi-lingual NER systems.
**Task Website:** [https://multiconer.github.io/](https://multiconer.github.io/)
**Codalab Submission site:** [https://competitions.codalab.org/competitions/36044](https://competitions.codalab.org/competitions/36044)
**Mailing List:** [multiconer-semeval@googlegroups.com](mailto:multiconer-semeval@googlegroups.com)
**Slack Workspace:** [https://join.slack.com/t/multiconer/shared\_invite/zt-vi3g97cx-MpqTvS07XX22S78nRC2s0Q](https://join.slack.com/t/multiconer/shared_invite/zt-vi3g97cx-MpqTvS07XX22S78nRC2s0Q)
**Training Data:** [https://multiconer.github.io/dataset](https://multiconer.github.io/dataset)
**Baseline System:** [https://multiconer.github.io/baseline](https://multiconer.github.io/baseline)
**Shared task schedule:**
* Training data ready: September 3, 2021
* Evaluation data ready: December 3, 2021
* Evaluation start: January 10, 2022
* Evaluation end: by January 31, 2022 (latest date; task organizers may choose an earlier date)
* System description paper submissions due: February 23, 2022
* Notification to authors: March 31, 2022
**Task organizers**
* Shervin Malmasi (Amazon)
* Besnik Fetahu (Amazon)
* Anjie Fang (Amazon)
* Sudipta Kar (Amazon)
* Oleg Rokhlenko (Amazon)
Please reach out to the organizers at [multiconer-semeval-organizers@googlegroups.com](mailto:multiconer-semeval-organizers@googlegroups.com), or join the Slack workspace to connect with the other participants and organizers. | 0.92 | t3_qprljs | 1,636,418,432 |
LanguageTechnology | Is it possible to do an Aspect Based Sentiment Analysis using XLNet? | Hi everyone,
I am doing Aspect Based Sentiment Analysis using a BERT model; however, I noticed that the state-of-the-art XLNet model outperformed the BERT model in most NLP applications. I couldn't find any implementation of Aspect Based Sentiment Analysis with XLNet on the Internet, so I am curious whether it is possible to do. | 1 | t3_qpdk9p | 1,636,377,856
LanguageTechnology | Get all the topics from a given text. | I am a complete newbie to NLP. I have a situation in front of me:
Suppose there is a (finite) set (**A**) of topics, for example- environment, space technology, tribal development, economics, politics, etc.
I have **another set (B)** containing a large number of texts, each about 100 to 500 words.
I have to classify every piece of text against the **given** set (A) of topics only, for example:
**Text 1 ->**
"Deforestation of the Amazon rainforest in Brazil has surged to its highest level since 2008, the country's space agency (Inpe) reports. A total of 11,088 sq km (4,281 sq miles) of rainforest were destroyed from August 2019 to July 2020. This is a 9.5% increase from the previous year. The Amazon is a vital carbon store that slows down the pace of global warming. Scientists say it has suffered losses at an accelerated rate since Jair Bolsonaro took office in January 2019. The Brazilian president has encouraged agriculture and mining activities in the world's largest rainforest." (credits: BBC)
Output 1 should be - **environment**
Output 2 (can be more liberal and can contain topics other than those present in the given set (A) of topics) - environment, global warming, rainforests, Brazil, etc.
**Text 2 ->**
"Elon Musk is developing a vehicle that could be a game-changer for space travel. Starship, as it's known, will be a fully reusable transport system capable of carrying up to 100 people to the Red Planet. The founding ethos of Elon Musk's private spaceflight company SpaceX was to make life multi-planetary. This is partly motivated by existential threats such as an asteroid strike big enough to wipe out humanity." (credits: BBC)
Output 1 should be - **space technology**
Output 2 (can be more liberal and can contain topics other than those present in the given set (A) of topics) - space technology, science, technology, Elon Musk, space, etc.
What are the different approaches to deal with the above problem, to get both output 1 and output 2, and what are the costs associated with them?
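One cheap baseline before reaching for supervised classifiers or embedding models: score each text against a seed keyword list per topic in set A and take the best-scoring topic (the Output 2 keywords could come from TF-IDF or a keyphrase extractor instead). A sketch with two invented seed lists:

```python
TOPIC_KEYWORDS = {  # hypothetical seed lists for set A
    "environment": {"rainforest", "deforestation", "warming", "carbon"},
    "space technology": {"spacex", "starship", "spaceflight", "asteroid"},
}

def classify(text):
    # Score = overlap between the text's tokens and each topic's keywords
    tokens = set(text.lower().split())
    scores = {t: len(tokens & kw) for t, kw in TOPIC_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("Deforestation of the Amazon rainforest surged"))
# environment
```

Heavier options with higher costs: zero-shot classification with a pretrained NLI model, or fine-tuning a classifier once labeled examples exist.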
PS. I'm new to this area of learning, so please be liberal with your advice and forgive any mistakes I may have made while asking the question. | 0.86 | t3_qpby7i | 1,636,372,096
LanguageTechnology | About to apply for a Master's degree in Computational Linguistics; in want of information from current or former students (especially from Saarland, Tubingen and Stuttgart) | Hi everyone,
I'm about to complete my bachelor's degree in English studies (I'm in third year, Western Europe), and I have to apply for a Master's degree this year. Alongside my studies, it's now been four years since I started working as a translator, specialized in localization, and I've had the opportunity to work regularly with famous video game companies and translate a variety of content.
I first had in mind to apply for a translation Master's degree, but as I already have had a peek at the translation industry by working, I'd like to broaden my skills so as to get better opportunities in the future as well as career development prospects, since I don't see myself having the same job during all my life.
One of the classes that I appreciate the most where I study, aside from translation, is linguistics. Moreover, I've always had a genuine interest in computing, and even though I'm only doing web development stuff (HTML/CSS/JS), I'm willing to learn other languages and develop my skills in this field.
Now, with those two variables in the equation, I think computational linguistics could be a great opportunity for me, as it mixes two of my biggest interests and is still a relevant field with regard to the translation industry.
One of my biggest flaws is maths: it's now been more than five years since I stopped doing maths, because I didn't need it during my studies. I've seen that some universities in Western Europe accept students coming from a linguistics background and offer optional courses for such students. From what I've seen, these universities are generally located in Germany, namely Saarland, Tubingen and Stuttgart.
As far as I'm concerned, Germany would be the best choice as, even though I do not speak German, the country is contiguous to where I live and has extremely low fees compared to other universities, such as the University of Edinburgh, or University of Washington in Seattle. Now, here are some specific questions I'd like to ask to current or former students of these German universities:
— as someone who has little programming experience but is willing to learn, which university would be the best choice?
— how much math knowledge is required? Just enough for programming or more?
— how many hours of classes are there on average per week, and does the general schedule allow one to have a job alongside one's studies? To take my own example, where I am, I have about 20 hours of classes per week, about 10 hours of work at home for the university, and 10 to 15 hours of real work (translation).
Obviously, I'd also love to hear the answers of people not coming from these universities — I've taken those as examples because I've heard of them the most on the Internet, but feel free to talk about your own path, it may give me ideas!
Thank you much for reading! | 0.94 | t3_qoomq5 | 1,636,292,096 |
LanguageTechnology | FLAN: Fine-tuned LAnguage Nets | nan | 1 | t3_qoh2dj | 1,636,259,072 |
LanguageTechnology | One sentence highlight for every EMNLP-2021 Paper | Here is the list of all EMNLP 2021 (Empirical Methods in Natural Language Processing) papers, and a one sentence highlight for each of them. EMNLP 2021 is to be held both online and in Punta Cana, Dominican Republic from Nov 07 2021.
[https://www.paperdigest.org/2021/11/emnlp-2021-highlights](https://www.paperdigest.org/2021/11/emnlp-2021-highlights) | 1 | t3_qodet2 | 1,636,246,016 |
LanguageTechnology | Quoting in pandas | Can anyone please explain what quoting=n means when reading a pandas data frame?
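pandas borrows its `quoting` values from the stdlib csv module: 0 = QUOTE_MINIMAL (default), 1 = QUOTE_ALL, 2 = QUOTE_NONNUMERIC, 3 = QUOTE_NONE. Passing `quoting=3` (i.e. `csv.QUOTE_NONE`) tells the parser to treat quote characters as ordinary text, which is typically what fixes "EOF inside string" errors caused by a stray unbalanced quote in the file. A stdlib-only illustration:

```python
import csv, io

line = 'a,"unterminated,b\n'   # a stray quote that never closes

# QUOTE_NONE: quote chars are ordinary characters, the row parses fine
reader = csv.reader(io.StringIO(line), quoting=csv.QUOTE_NONE)
print(next(reader))  # ['a', '"unterminated', 'b']

# The integer values pandas accepts for quoting=n
print(csv.QUOTE_MINIMAL, csv.QUOTE_ALL, csv.QUOTE_NONNUMERIC, csv.QUOTE_NONE)
# 0 1 2 3
```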
I got this solution on Stack Overflow when trying to solve an EOF error, but I don't understand why it works | 0.67 | t3_qo82t8 | 1,636,228,992
LanguageTechnology | finding all ngrams given specific (n-1)gram in nltk | I'm struggling to find an efficient method for what seems, conceptually, to be a fairly simple task. I want to take a given trigram and look for all 4-grams in my text that contain that specific trigram. Eventually I want to do this recursively, which I feel shouldn't be computationally intensive, but I'm struggling to find options using the tokenized vocabulary and corpus rather than having to constantly go back to strings. | 1 | t3_qo3spl | 1,636,216,064
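A 4-gram contains a given trigram exactly when the trigram is its first or last three tokens, so one pass over the token list with zip is enough — no strings needed. A sketch (`nltk.util.ngrams` could replace the `ngrams` helper, but the logic is the same):

```python
def ngrams(tokens, n):
    # All contiguous n-grams as tuples
    return list(zip(*(tokens[i:] for i in range(n))))

def extensions(tokens, gram):
    # All (len(gram)+1)-grams whose prefix or suffix is `gram`
    n = len(gram) + 1
    return [g for g in ngrams(tokens, n)
            if g[:-1] == gram or g[1:] == gram]

toks = "the cat sat on the cat sat down".split()
print(extensions(toks, ("the", "cat", "sat")))
# [('the', 'cat', 'sat', 'on'), ('on', 'the', 'cat', 'sat'),
#  ('the', 'cat', 'sat', 'down')]
```

For the recursive version, calling `extensions` again on each result grows the context one token at a time, staying in tuple form throughout.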
LanguageTechnology | Word senses clustering with state-of-the-art models? | Hi everyone
I'm a CS student trying to study and research a specific topic for my AI class. I'm new to this field but have done some searching on the topic.
As the header says, I'm trying to semantically cluster polysemous words, i.e. words with different meanings, in a corpus.
my input is: a corpus
the output I want is: a clustering of the different meanings of the K most frequent words with their semantic synonyms; e.g.: suppose the word "cell" occurs 1000 times in the corpus but with different meanings. In the sentence " *There are many organelles in a biological* ***cell*** ", "cell" relates semantically to biology; in " *He went to a prison* ***cell*** ", it means a jail room; and in "cell phone" it refers to a mobile phone. So we get several clusters of "cell" with their synonyms.
Finding the K frequent words is kind of preprocessing and can be done easily.
For the clustering part I searched for related papers; WordNet seemed similar to what I need!
Also there are some word embeddings in the literature like GloVe, FastText, Word2vec, BERT, and ELMo (which is contextualized and seems to be helpful) that can propose similar vectors. The vectors with the highest percentage of similarity will be selected.
The thing is, most words have multiple senses, and as I explained above, each meaning of a word is contextualized by its sentence. I thought it would be interesting to build a BERT vector for one of the K frequent words (e.g. "cell" as in "cell phone") and compare it with vectors from other sentences in our corpus (that's actually my first intuition, but I'm not sure what happens under the hood), so we would have clusters of polysemous words with semantically similar meanings in each cluster, plus their corresponding sentences kept as examples for later use.
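The comparison step described here boils down to cosine similarity between contextual vectors (one vector per occurrence of "cell", taken from BERT/ELMo), followed by any standard clustering (k-means, agglomerative) over those vectors. The similarity itself, with stdlib math and made-up 3-d vectors standing in for 768-d BERT embeddings:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

cell_bio    = [0.9, 0.1, 0.0]   # "biological cell" (toy vector)
cell_prison = [0.1, 0.9, 0.1]   # "prison cell"
cell_bio2   = [0.8, 0.2, 0.1]   # another biology context

# Same-sense occurrences should score higher than cross-sense ones
print(cosine(cell_bio, cell_bio2) > cosine(cell_bio, cell_prison))  # True
```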
I'm not sure if this is the right way to do it or not! But I'm asking my question here to get other intuitions if possible, or to learn about more accurate and popular techniques in the field.
thanks for your time.
any information would be helpful. | 0.67 | t3_qnxzlt | 1,636,196,480 |
LanguageTechnology | Google AI Introduces ‘GoEmotions’: An NLP Dataset for Fine-Grained Emotion Classification | The emotions we experience daily can motivate us to act and influence the significant and minor decisions we make in our lives. They therefore greatly influence how people socialize and form connections.
Communication helps us to express a vast range of delicate and complicated emotions with only a few words. With recent advancements in NLP, several datasets for language-based emotion categorization have been made accessible. The majority of them focus on specific genres (news headlines, movie subtitles, and even fairy tales) and the six primary emotions (anger, surprise, disgust, joy, fear, and sadness). There is, therefore, a need for a larger-scale dataset covering a greater range of emotions to allow for a broader range of possible future applications.
A recent Google study introduces [GoEmotions](https://arxiv.org/pdf/2005.00547.pdf): a human-annotated dataset of fine-grained emotions with 58k Reddit comments taken from major English-language subreddits and 27 emotion categories identified. It has 12 positive, 11 negative, and 4 ambiguous emotion categories, plus 1 “neutral” category, making it broadly useful for conversation interpretation tasks that demand delicate discrimination between emotion displays. They also provide a full tutorial that shows how to use GoEmotions to train a neural model architecture and apply it to recommending emojis based on conversational text.
# [Quick Read](https://www.marktechpost.com/2021/11/05/google-ai-introduces-goemotions-an-nlp-dataset-for-fine-grained-emotion-classification/) | [Paper](https://arxiv.org/pdf/2005.00547.pdf)| [Google Blog](https://ai.googleblog.com/2021/10/goemotions-dataset-for-fine-grained.html) | 0.94 | t3_qnlhbn | 1,636,148,864 |
LanguageTechnology | Identify Scenarios/Topics from dataset | Hi Guys!
I have the following use case: I have a dataset containing roughly 100 sentences that describe certain components of a multi-component system.
I am interested in identifying which sentence describes which component of the system. I know that I can use a topic modeling algorithm like LDA to find topics for each sentence in the dataset.
The problem is, from what I know LDA does not regard context. The difficulty for my specific case is that there are certain sentences in my dataset that only have semantic value when the context is known.
I think it's better to show an example to illustrate what I mean lol
Let's assume hypothetically that my dataset contains 100 sentences describing the various components of a computer, like CPU, GPU, motherboard, etc.
and these two sentences are part of the Dataset:
* The GPU is manufactured by ASUS
* it has 12GB Memory
So we can see that the first sentence is talking about the component GPU, and the algorithm should identify this sentence as GPU topic. The second sentence is obviously also talking about the GPU if we look at the context (not a problem for us humans), but if we look at the sentence on its own, it would be impossible to say that the algorithm should also classify it as GPU topic. So the algorithm should somehow understand that this sentence belongs together with the sentence in the row before it in the dataset, and classify it into the same topic, GPU.
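A lightweight heuristic for exactly this case: if a sentence opens with an anaphoric pronoun ("it", "this", "they", ...), merge it into the preceding sentence before topic modeling, so "it has 12GB Memory" inherits the GPU context. Full coreference resolution (e.g. a spaCy-based coref pipeline) is the heavier alternative. A sketch of the heuristic:

```python
PRONOUN_STARTS = {"it", "this", "that", "they", "these", "those"}

def merge_contextual(sentences):
    merged = []
    for s in sentences:
        words = s.split()
        first = words[0].lower() if words else ""
        if merged and first in PRONOUN_STARTS:
            merged[-1] += " " + s   # attach to the previous sentence
        else:
            merged.append(s)
    return merged

docs = ["The GPU is manufactured by ASUS", "it has 12GB Memory",
        "The CPU has 8 cores"]
print(merge_contextual(docs))
# ['The GPU is manufactured by ASUS it has 12GB Memory', 'The CPU has 8 cores']
```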
So my question is, what is the best way to solve this issue, apart from manually letting a human look over the dataset and join rows together? | 0.72 | t3_qnfrw7 | 1,636,132,352
LanguageTechnology | Using NLP way to identify controversial topics? | Hi all,
I’m a psychology researcher, and am interested in the prospect of using nlp and topic modelling to find potential controversial topics in online forums (such as here on Reddit). Would there be any particular techniques in nlp (sentiment analysis etc) that could be used to do this?
Thank you in advance. | 0.9 | t3_qn9es3 | 1,636,113,024 |
LanguageTechnology | Flattening / neutralizing emotion in text | Hi all! I'm working on a research process looking to do entailment for fact checking, and one of the things I want to experiment with is modifying emotional words from text. The models seem to rely too much on emotion to make their classifications, so I want to take that away from them and see how they perform. Some examples might be:
"I hated the movie, it was terrible. And I loathe the actor." --> "I disliked the movie, it was bad. And I dislike the actor."
I imagine it would be a lexicon-based approach, and not as simple at all as that example, but I'm curious if anyone has head of anything along these lines.
Thanks! | 1 | t3_qmqqsm | 1,636,049,408 |
LanguageTechnology | Context and Resources to apply NLP to source code | Hello, I am new here. I am a 3rd-year data science major looking to work on a personal project applying NLP to identify/classify vulnerabilities in source code (C++, C). Given that I am new to this game, I would be much obliged if more experienced folk could refer me to some resources on using code as text for NLP. I am having trouble finding resources for this myself, aside from the odd research paper w/o code :( . | 1 | t3_qmne6s | 1,636,040,576
LanguageTechnology | Multilingual sentence vectors for 50+ languages | Hey everyone, I wrote a pretty long article covering [multilingual sentence transformers](https://www.pinecone.io/learn/multilingual-transformers/), including how to build our own. It's super interesting imo and I focused on something called 'multilingual knowledge distillation' by Nils Reimers and Iryna Gurevych, which has been used to build sentence transformers that work with 50+ languages - which is incredible to me!
It's really useful for low-resource languages as it just requires translation pairs (like English-to-<insert language here>) and doesn't need that much data either.
Anyway, I hope it's useful - let me know what you think, thanks! | 0.95 | t3_qmlw1a | 1,636,036,352 |
LanguageTechnology | Has anyone used sentence-BERT embeddings for sentiment analysis? | Not sure if it is feasible to use sentence embeddings to do few-shot sentiment analysis? | 0.75 | t3_qmllgs | 1,636,035,456
LanguageTechnology | Speech Emotion Recognition | What are some of the state of the art speech emotion recognition architectures/alghorithms? | 1 | t3_qmihzw | 1,636,025,344 |
LanguageTechnology | Real life needs for NLP | Hello,
I have a question: what are real-life examples of, and motivations for, making use of NLP services (text summarization, sentiment analysis, etc.)?
Is there a need for NLP tasks in real-life financial domains, like banks and insurance, and in the healthcare domain too?
Thank you | 1 | t3_qmgu8s | 1,636,018,176 |
LanguageTechnology | What method to use for Out-of-distrubution detection? | I have a stream of log data from users. There are some comments from users that I would like to classify as distinct from others (only 1 class). This seems to me like an OOD problem where 99% percent of the data could be whatever (i.e normal language) and 1% of them belong to a certain class. Has anyone worked on a similar problem or has any good ideas/papers that I should try implementing? | 1 | t3_qmgjlz | 1,636,016,768 |
LanguageTechnology | HuBERT: How to Apply BERT to Speech, Visually Explained | nan | 0.98 | t3_qmgm0v | 1,636,017,024 |
LanguageTechnology | Why is one of the features dominating all the rest of the features in my trained SVM? | I have been given a task to train an SVM model on the [conll2003 dataset](https://huggingface.co/datasets/conll2003) for Named Entity "Identification" (that is, I have to tag all tokens in "Statue of Liberty" as named entities, not as a place, which would be the case in named entity recognition).
Initially, I built a feature `first_letter_caps` which returned `1` if the first letter of the token was capital else `0`. This resulted in the first token of every sentence always getting identified as a named entity. So, I changed it to do that only for non-sentence-start-token and always return `0` for sentence-start-token (that is, for the first word in the sentence). This resulted in the first token of every sentence always getting identified as a NON-named entity. So I realized that somehow I have to "turn off" this feature for sentence-start-token and not return a fixed value. So, I made this feature return logical OR of other features (explained in next paragraph) for sentence-start-token, thinking that this will have the effect of turning off this feature and this turned out to be true. And this was quite successful. It stopped "always" identifying sentence-start-token as either named entity or non-named-entity.
Now I have a few issues, but let me explain the other features first. To avoid "The" in "The Federal Bank" being identified as a named entity, I built the feature `is_stopword`, which returns `1` when the token is a stopword, else `0`. I also have a third feature, `contains_number`, which returns `1` if the token contains a number, else `0`.
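Roughly, the three features look like this (a simplified sketch, not my exact code; the stopword list here is a tiny illustrative stand-in):

```python
# Illustrative sketch of the three features described above.
STOPWORDS = {"the", "a", "an", "of", "in", "and"}

def is_stopword(token):
    return int(token.lower() in STOPWORDS)

def contains_number(token):
    return int(any(ch.isdigit() for ch in token))

def first_letter_caps(token, is_sentence_start):
    # For the sentence-start token, fall back to the OR of the other
    # features instead of a fixed value, as described above.
    if is_sentence_start:
        return int(is_stopword(token) or contains_number(token))
    return int(token[:1].isupper())

def featurize(token, is_sentence_start):
    return [
        first_letter_caps(token, is_sentence_start),
        is_stopword(token),
        contains_number(token),
    ]
```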
I have trained `sklearn.svm.SVC` with a linear kernel, but it never identifies tokens containing numbers as named entities, and if a stopword has its first letter capitalised (as in "The"), it classifies it as a named entity. After printing [`SVC.coef_`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.coef_) I realized that the issue is that it assigns a positive coefficient only to the feature `first_letter_caps` and negative or zero coefficients to all the other features. When I plotted a feature comparison, I saw that it is using only the `first_letter_caps` feature for decision making:
[https://i.stack.imgur.com/HVzvrm.png](https://i.stack.imgur.com/HVzvrm.png)
Somehow the feature `first_letter_caps` is dominating all the other features and the SVM decision boundary. How do I fix this? What am I missing? | 1 | t3_qm7n5r | 1,635,983,232
LanguageTechnology | Stanza not tokenising sentences as expected | Hello.
I am trying to pre-process my text data for a word alignment task.
I have a text file of sentences. Each sentence is on a new line:
a man in an orange hat starring at something .
a boston terrier is running on lush green grass in front of a white fence .
a girl in karate uniform breaking a stick with a front kick .
five people wearing winter jackets and helmets stand in the snow , with snowmobiles in the background .
people are fixing the roof of a house .
a man in light colored clothing photographs a group of men wearing dark suits and hats standing around a woman dressed in a strapless gown .
I am using [Stanza](https://stanfordnlp.github.io/stanza/) to tokenise the sentences:
import stanza

# Setup assumed for context (not shown in the original snippet):
nlp = stanza.Pipeline(lang="en", processors="tokenize")
doc_en = nlp(text_en)  # text_en holds the raw English text

en_token = []
for sentence in doc_en.sentences:
    en_token.append([token.text for token in sentence.tokens])
My expected output is:
[["a", "man", "in", "an", "orange", "hat", "starring", "at", "something", "."], ["a", "boston", "terrier", "is", "running", "on", "lush", "green", "grass", "in", "front", "of", "a", "white", "fence", "."],
["a", "girl", "in", "karate", "uniform", "breaking", "a", "stick", "with", "a", "front", "kick", "."],
["five", "people", "wearing", "winter", "jackets", "and", "helmets", "stand", "in", "the", "snow", ",", "with", "snowmobiles", "in", "the", "background", "."],
["people", "are", "fixing", "the", "roof", "of", "a", "house", "."],
["a", "man", "in", "light", "colored", "clothing", "photographs", "a", "group", "of", "men", "wearing", "dark", "suits", "and", "hats", "standing", "around", "a", "woman", "dressed", "in", "a", "strapless", "gown", "."]]
Essentially, a list of lists, with each sentence in its own list and its words tokenised.
However, the output that I get is this:
[["a", "man", "in", "an", "orange", "hat", "starring", "at", "something", "."], ["a", "boston", "terrier", "is", "running", "on", "lush", "green", "grass", "in", "front", "of", "a", "white", "fence", "."],
["a", "girl", "in", "karate", "uniform", "breaking", "a", "stick", "with", "a", "front", "kick", ".", "five", "people", "wearing", "winter", "jackets", "and", "helmets", "stand", "in", "the", "snow", ",", "with", "snowmobiles", "in", "the", "background", ".", "people", "are", "fixing", "the", "roof", "of", "a", "house", "."],
["a", "man", "in", "light", "colored", "clothing", "photographs", "a", "group", "of", "men", "wearing", "dark", "suits", "and", "hats", "standing", "around", "a", "woman", "dressed", "in", "a", "strapless", "gown", "."]]
Stanza appears to be ignoring sentence boundaries in certain instances.
Would anyone know how to remedy this?
Since each sentence begins with a newline character, would it be possible to simply force a new list at every newline character and then perform word tokenisation? If yes, how would I do that?
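For example, something like the sketch below, where the simple regex tokenizer is only a stand-in for Stanza's (presumably I could instead run each line through the Stanza pipeline separately, or use its `tokenize_no_ssplit` option, but I have not verified either):

```python
import re

def tokenize_line(line):
    # Stand-in for Stanza's tokenizer: runs of word characters,
    # or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", line)

def tokenize_per_line(text):
    # One token list per non-empty input line, so a sentence
    # boundary is forced at every newline character.
    return [tokenize_line(line) for line in text.splitlines() if line.strip()]
```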
Thank you in advance for any help and advice. | 0.84 | t3_qm6j8c | 1,635,980,032 |
LanguageTechnology | Can open-domain QA models handle yes-no questions? | My understanding of open-domain QA is that it receives a question and must retrieve the evidence passage and the appropriate answer within that passage.
Can such models handle yes-no questions? I'm just curious because "yes" and "no" aren't really things you find in, for example, Wikipedia passages. | 0.95 | t3_qlums5 | 1,635,946,624 |
LanguageTechnology | Wav2CLIP: Connecting Text, Images, and Audio | nan | 0.83 | t3_qlkgdw | 1,635,905,792 |
LanguageTechnology | Tool for normalizing abbreviations? | Hello all,
I need to process a text and I'm looking for a Python tool able to transform abbreviations into their standard forms - for example, from "I'm" to "I am". I could do it by using regex, but I need to save time.
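For context, by "using regex" I mean a minimal dictionary-plus-regex sketch like this (the contraction map is just a tiny illustrative sample; a real list would be much longer, and case restoration and word boundaries are left out for brevity):

```python
import re

# Illustrative, far-from-complete contraction map.
CONTRACTIONS = {
    "i'm": "I am",
    "can't": "cannot",
    "won't": "will not",
    "it's": "it is",
    "don't": "do not",
}

_pattern = re.compile(
    "|".join(re.escape(k) for k in CONTRACTIONS), re.IGNORECASE
)

def expand_contractions(text):
    # Look up each match case-insensitively in the dictionary.
    return _pattern.sub(lambda m: CONTRACTIONS[m.group(0).lower()], text)
```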
Does anyone know if something like this exists, or at least of a list of such forms that could be of use? Thank you in advance! | 1 | t3_ql97x7 | 1,635,873,536
LanguageTechnology | Scientific Literature Review generation | Hello everyone,
I've developed an algorithm to automatically generate a literature review: [https://www.naimai.fr](https://www.naimai.fr/)
Hopefully it will be useful for PhDs (and non-PhDs alike)!
For those curious to understand how it works: [https://yaassinekaddi.medium.com/scientific-literature-review-generation-386f36b05eae](https://yaassinekaddi.medium.com/scientific-literature-review-generation-386f36b05eae)
I'd be thankful for any remarks :)
Cheers, | 1 | t3_ql2ofp | 1,635,854,592 |
LanguageTechnology | Good stopwords list for sentiment analysis | Does anyone know of a good stopwords list for sentiment-analysis pre-processing?
I'm trying to avoid removing words like 'can't', 'won't', 'no', etc. | 1 | t3_ql1m42 | 1,635,850,496
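One common workaround for the question above is to start from any general-purpose stopword list (e.g. NLTK's English list) and subtract sentiment-bearing negation terms before filtering. In the sketch below the inline sets are tiny illustrative stand-ins:

```python
# Tiny stand-in for a full stopword list (e.g. NLTK's English stopwords).
BASE_STOPWORDS = {"the", "a", "an", "is", "and", "no", "not", "can't", "won't"}

# Negation terms that carry sentiment and should survive filtering.
NEGATIONS = {"no", "not", "nor", "never", "can't", "won't", "don't"}

SENTIMENT_STOPWORDS = BASE_STOPWORDS - NEGATIONS

def filter_tokens(tokens):
    # Drop stopwords, but keep the negations removed from the set above.
    return [t for t in tokens if t.lower() not in SENTIMENT_STOPWORDS]
```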
LanguageTechnology | Top 10 Named Entity Recognition (NER) API | nan | 0.8 | t3_ql1813 | 1,635,848,832 |
LanguageTechnology | WinkNLP, a developer friendly NLP | See how winkNLP processes text as you type it! POS tagging, entity recognition and sentiment analysis all rolled into one simple package!
[https://winkjs.org/showcase-wiz/](https://winkjs.org/showcase-wiz/) | 0.93 | t3_qkxdfz | 1,635,831,296 |
LanguageTechnology | Any movie dataset with movie summaries? | Do you know of a dataset that contains movie summaries?
Do you know if researchers are legally allowed to download IMDB movie summaries for research purposes? | 0.76 | t3_qkvojb | 1,635,824,896 |
LanguageTechnology | Why do various balancing techniques yield no improvement in NLP tasks? | I have been given a task to train an SVM model on the conll2003 dataset for Named Entity "Identification" (that is, I have to tag all tokens in "Statue of Liberty" as named entities, not as a place, as would be the case in named entity recognition).
I have built several features and was able to improve the performance. Now the task asks me to deal with the imbalanced data. I have tried several techniques: oversampling, undersampling, SMOTE, and undersampling using NearMiss. But surprisingly, I got exactly the same F1 score as without doing anything about the imbalance. At first I felt I was doing something fishy and had made some stupid mistake, but now I feel that is not the case and that I am missing some subtle understanding.
Can you please share insight into exactly why such balancing techniques have no effect here? Also, is it the text data or the SVM that makes such techniques ineffective in this context? Any details / links?
PS: The task specifically asks to use an SVM and not any other model. | 0.83 | t3_qkosr8 | 1,635,803,520
LanguageTechnology | How to format input for NLTK IBM alignment models? | Hello.
I have a bunch of parallel data (>2.000.000 characters per language) in English-German and English-French that I need to word-align.
I intend to use NLTK's implementation of the [IBM alignment models](https://www.nltk.org/api/nltk.translate.ibm_model.html).
Based on the [documentation](https://www.nltk.org/api/nltk.translate.ibm1.html), it appears that the module input needs to be two lists of tokenised data. E.g., `(["I", "am", "going", "to", "the", "cinema", "."], ["Je", "viens", "au", "cinéma", "."])`
I have text files with about 34,000 lines of parallel sentences for English-German and English-French.
How can I process them to be able to input them into the module?
It is easy enough to tokenise the data in one language and place it into a list, but I am not sure how to create two separate lists for the input data.
Essentially what I have is:
EN.txt = '''The dog jumps.
The cow eats.
The fish swims.'''
FR.txt = '''Le chien saut.
La vache mange.
Le poisson nage.'''
And what I need to get is:
(["The", "dog", "jumps", "."], ["Le", "chien", "saut", "."])
(["The", "cow", "eats", "."], ["La", "vache", "mange", "."])
(["The", "fish", "swims", "."], ["Le", "poisson", "nage", "."])
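The reshaping step I have in mind would look roughly like the sketch below; the naive whitespace tokenizer works here only because my data already has spaces before punctuation, and I am assuming (without having verified it) that NLTK's `IBMModel1` wants these pairs wrapped in `AlignedSent` objects:

```python
def read_bitext(src_lines, tgt_lines):
    # Pair up the parallel files line by line and tokenize each side.
    bitext = []
    for src, tgt in zip(src_lines, tgt_lines):
        bitext.append((src.split(), tgt.split()))
    return bitext

# With real files, something like:
# with open("EN.txt") as f_en, open("FR.txt") as f_fr:
#     bitext = read_bitext(f_en, f_fr)
# and then, if I read the NLTK docs correctly:
# from nltk.translate import AlignedSent, IBMModel1
# model = IBMModel1([AlignedSent(e, f) for e, f in bitext], 5)
```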
If I have not explained myself well enough, please let me know.
Thank you in advance for your help. | 1 | t3_qkib1j | 1,635,785,728 |
LanguageTechnology | Time Complexity of Transformers (and RNNs and ConvNets) | I was watching the guest lecture by the authors of the original Transformer for Stanford's CS224n course on NLP \[[Link-YouTube](https://youtu.be/5vcj8kSwBCY)\], in which they talk about how Transformers perform much faster than the traditional RNN and ConvNet models *if the sequence length is orders of magnitude smaller than the model dimension, which is usually the case*. They also had this slide on the time complexities of different models \[[Link-Image](https://ibb.co/2gC4Rzw)\]. My question: shouldn't the compute time be independent of sequence length for ConvNets and Transformers, since they can be parallelized (while training)? And even at test time, can you explain where the length^(2) term comes from for the Transformers? Thanks! | 1 | t3_qk7hgj | 1,635,745,536
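For reference, the quadratic term in the question above comes from the attention score matrix: computing QK^T means n*n dot products of d-dimensional vectors, i.e. O(n^2 * d) multiplications per layer, versus O(k * n * d^2) for a convolutional layer (matching the per-layer complexity table in the lecture); parallel hardware spreads this work across units but does not reduce the total. A toy operation count:

```python
def attention_score_mults(n, d):
    # QK^T: an (n x d) times (d x n) matmul -> n*n entries, d mults each.
    return n * n * d

def conv_layer_mults(n, d, k):
    # One 1-D conv layer: n output positions, each combining k
    # d-dimensional inputs into a d-dimensional output -> k*d*d mults.
    return n * k * d * d
```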
LanguageTechnology | How can I balance sentence data for NLP tasks | I have been given a task to train an SVM model on the [conll2003 dataset](https://huggingface.co/datasets/conll2003) for Named Entity "Identification" (that is, I have to tag all tokens in "Statue of Liberty" as named entities, not as a place, as would be the case in named entity recognition).
I am building features which involve multiple tokens in sequence; that is, features that use the surrounding tokens to determine whether the token at a particular position is a named entity or not. So, as you may have guessed, there are dependencies between these tokens.
Now, the data is very imbalanced: there are far more non-named entities than named entities, and I wish to fix this. But I cannot simply oversample/undersample tokens randomly, as it may produce nonsensical sentences through loss of the relations between tokens.
I am also unable to see how I can use [other balancing techniques](https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets) like Tomek links or SMOTE on such sentence data (that is, without making the sentences meaningless).
So what are best / preferred techniques to balance such data? | 0.88 | t3_qk0t52 | 1,635,720,960 |
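One option that sidesteps the broken-sentence problem in the post above is to resample at the sentence level: duplicate whole entity-bearing sentences, so token order and context stay intact. A rough illustrative sketch (not a claim that this outperforms SMOTE or the other techniques mentioned):

```python
def oversample_sentences(tagged_sentences, k=2):
    # tagged_sentences: list of (tokens, tags) pairs, where tags is a
    # parallel list of 0/1 entity indicators per token.
    # Append k extra copies of every sentence containing at least one
    # entity token; sentences stay whole, so token context is preserved.
    out = list(tagged_sentences)
    for tokens, tags in tagged_sentences:
        if any(tags):
            out.extend([(tokens, tags)] * k)
    return out
```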
LanguageTechnology | Best way to store BERT embeddings on AWS? | I'm using sentence-transformers to generate 768-dimensional embeddings. I was previously saving these in Postgres on RDS as DOUBLE[], which works great. I'm looking to scale MLOps, and SageMaker tooling seems pretty S3-heavy. I'm also looking to move towards Serverless Aurora, which has a 1 MB read limit, so my current psql setup won't do. And I'd also love all that data pipeline / feature store / step caching functionality built around S3.
Let's say one user has any number of embeddings, with real-time reads and writes of those. I don't think saving each embedding as a single CSV is the way: multiple single CSVs seem wasteful if I want to read all of a user's embeddings, and one big CSV doesn't seem real-time write-safe. I'm new to Parquet and keep seeing it mentioned. Is it pretty real-time friendly? Are there other solutions? | 1 | t3_qjyy3n | 1,635,714,944
LanguageTechnology | BERT embedding NLP | We are working on an NLP project using a Universal Dependencies Tamil treebank. The following is the preprocessed data frame, in which the column Form is to be word-embedded using BERT. Since the column is already tokenized, only the word embedding step is left, yet all the examples we came across take raw text data and tokenize it using BERT.
So I just wanted to know whether there is a way to embed the already-tokenized column using BERT.
I have attached a snippet of the preprocessed data in the chat. | 0.92 | t3_qjsaxj | 1,635,695,488 |
LanguageTechnology | How can I use POS tags and chunk IDs as features to train a model when the input test sentence won't have them | I have been given a task to train an SVM model on the [conll2003 dataset](https://huggingface.co/datasets/conll2003) for Named Entity "Identification" (that is, I have to tag all tokens in "Statue of Liberty" as named entities, not as a place, as would be the case in named entity recognition). The conll2003 dataset contains part-of-speech tags and chunk IDs for each token, and we used them to train the SVM model. We can also measure the model's performance against the test and validation datasets, as both of them also contain part-of-speech tags and chunk IDs for each token. But what if someone simply inputs some random sentence (without POS tags and chunk IDs) for prediction, i.e. from outside the test dataset? How should we handle this? Should we avoid these features altogether while training? Or "somehow" generate these features for the input sentence before feeding it to the model for prediction? If yes, how is this generation usually done? And what is the standard approach? | 1 | t3_qjppb9 | 1,635,687,552
LanguageTechnology | [P] “Abstractified Multi-instance Learning (AMIL) for Biomedical Relation Extraction” | nan | 1 | t3_qjbmll | 1,635,631,232 |
LanguageTechnology | Per-sentence readability metrics | Wondering if anyone has come across any text readability metrics that work on a per-sentence basis? I’ve come across - and used- several that work on a full-text basis, as in telling me the readability of corpus X, but none that will tell me the readability ofeach sentence x in X. | 1 | t3_qj81b3 | 1,635,620,224 |
LanguageTechnology | Suggestions on how to classify paragraphs in fiction books to a set of genres | Hello everyone! I am new to NLP and am working on a project where we have to classify fiction books, either at the paragraph or chapter level, into a set of genres (we're keeping a set of 5 main labels: 'romance', 'suspense', 'adventure', 'tragedy', 'comedy') and sub-labels within each main label.
We are using books available from Project Gutenberg, and have some paragraph/chapter breaks ready. However, there are no genre annotations, so based on my background study, I have the following ideas/conclusions:
- This seems like a task between text classification and sentiment analysis. I found that text classification seems to rely a lot on special seed/key words, which may not be the best approach when trying to capture context in a fiction book. Hence I am leaning towards sentiment-analysis methods that take context into account, but we do lack labeled data here.
- For an unsupervised technique, I am thinking of starting with LDA and then trying to manually match the output topics to our main set of genres. I fear this approach would fail to capture context from the text.
- As a weakly supervised technique, I have found the paper 'Contextualized Weak Supervision for Text Classification'. I have to try it and see how it fares.
- I will try to annotate some books in order to try some supervised methods, but I want to keep this as a backup option since it would be a monumental task.
Do you think I am headed in the right direction? I would appreciate any and all suggestions! Thank you so much. | 1 | t3_qj4kjx | 1,635,609,984 |
LanguageTechnology | Building a Grammar Model | I'm learning an inflected language, and I would like to build a grammar model to check myself with.
I have a corpus of sentences with grammatical tags (POS, case, conjugation, etc.). I'm specifically looking for something that will check if nouns and verbs are correctly cased/conjugated.
Is there an automated tool that could build syntax trees from the corpus, and then check my sentences against them? | 0.76 | t3_qj0pya | 1,635,597,824 |
LanguageTechnology | The Obscenity List - Free Dataset of Profanities | nan | 0.81 | t3_qipnx7 | 1,635,552,256 |
LanguageTechnology | How to Approach [NLP]: Classification of partial sentences (or words) | How to approach this problem:
Suppose we have partially completed sentences (or words) and their corresponding labels. How do we classify them?
Example: Suppose we have to predict the category of a sentence in the App Store or Play Store.
`Text Label`
`"instagram" -> social`
`"inst" -> social`
`"whatsapp" -> communication`
`"wha" -> communication`
"instagram" is a full word but "inst" is a partial word. "whatsapp" is a full word but "wha" is a partial word. | 1 | t3_qiuw40 | 1,635,572,096 |
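One simple baseline for the partial-word setup above: index every labeled full word under all of its prefixes, then classify a partial input by majority vote over the labels its prefix has been seen with. The sketch below is purely illustrative (names and data are made up); character n-gram classifiers would be the more standard route.

```python
from collections import Counter, defaultdict

def build_prefix_index(examples):
    # examples: list of (word, label); index every prefix of every word.
    index = defaultdict(Counter)
    for word, label in examples:
        for i in range(1, len(word) + 1):
            index[word[:i]][label] += 1
    return index

def classify(index, text):
    # Majority label among words sharing this prefix; None if unseen.
    counts = index.get(text)
    return counts.most_common(1)[0][0] if counts else None
```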
LanguageTechnology | How can we assign sentiment scores to preprocessed words? | I'm currently implementing a domain-based sentiment dictionary and couldn't find a way to assign sentiment scores to the preprocessed words. If anyone could give advice, that would be great.
Thank you for your kind replies. | 0.67 | t3_qij7y9 | 1,635,532,288 |
LanguageTechnology | Apple AI Researchers Propose ‘Plan-then-Generate’ (PlanGen) Framework To Improve The Controllability Of Neural Data-To-Text Models | In recent years, developments in neural networks have led to the advance of data-to-text generation. However, their inability to control structure can be limiting when applied to real-world applications requiring more specific formatting.
Researchers from Apple and the University of Cambridge propose a novel [Plan-then-Generate (PlanGen)](https://arxiv.org/pdf/2108.13740.pdf) framework to improve the controllability of neural data-to-text models. PlanGen consists of two components: a content planner and a sequence generator. The content planner first predicts the most likely plan that the output will follow; the sequence generator then generates the result using the data and the content plan as input.
# [Quick Read](https://www.marktechpost.com/2021/10/28/apple-ai-researchers-propose-plan-then-generate-plangen-framework-to-improve-the-controllability-of-neural-data-to-text-models/) | [Paper](https://arxiv.org/pdf/2108.13740.pdf) | [Github](https://github.com/yxuansu/plangen) | [Dataset](https://github.com/google-research-datasets/ToTTo) | 0.88 | t3_qhxg5f | 1,635,457,664 |
LanguageTechnology | Using Blenderbot w/o ParlAI | Hi all,
I'm really new to the field of NLP and deep learning in general (I have never used torch before, haha). I wanted to know how one would go about getting Blenderbot to run independently of ParlAI, or, at the very least, how to create a script to run the bot using ParlAI's library. I have managed to download the model files (as .tgz), but am not sure exactly how to go about that task.
I've looked at ParlAI's scripting section but it was not clear to me how to go about incorporating one of their models. Ideally, I'd want to be able to write a method that takes in a string input and produces a string output through blenderbot. If you have any experience in this or advice, it'd be greatly appreciated! Thanks! | 0.75 | t3_qht3vz | 1,635,445,120 |