Dataset schema:
- sub: string (4 classes)
- title: string (lengths 3–304)
- selftext: string (lengths 3–30k)
- upvote_ratio: float32 (range 0.07–1)
- id: string (length 9)
- created_utc: float32 (range 1.6B–1.65B)
LanguageTechnology
Extracting topics from blog titles
Hey. I have a pet project which is a blog aggregator. What I want to do is extract topics for each blog title, for example:

- "Using Documentation-Driven Design to Guide API Decisions" - API, Documentation
- "Measuring Web Performance at Airbnb" - Web development, Software Engineering
- "How Spotify Uses ML to Create the Future of Personalization" - Machine Learning, Personalization

I have zero background in ML and NLP. Can someone please suggest where I should look? I have tried gensim, but it was hard for me. Maybe I need some background reading to start with it? My initial thought was that I could train a model on Reddit titles from different subreddits and then use it.
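One low-effort starting point, before training anything on Reddit titles, is zero-shot classification against a hand-picked label set. A minimal sketch, assuming the Hugging Face `transformers` library; the candidate labels here are illustrative, not a fixed taxonomy:

```python
# Zero-shot topic tagging: score each candidate label against the title.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

title = "Measuring Web Performance at Airbnb"
labels = ["machine learning", "web development", "API design", "documentation"]

# older transformers versions use multi_class= instead of multi_label=
result = classifier(title, candidate_labels=labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # keep labels above some score threshold
```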
1
t3_rdy9o1
1,639,223,680
LanguageTechnology
Training text-generating models locally
Hi all. I am toying with the idea of buying some hardware to train language models that can generate text. Would the RTX 3090, for example, be an appropriate GPU for this use case given that it has 24GB of VRAM? Assuming a modest dataset with ~100k samples for fine-tuning, would I be able to train and test models reasonably well? Obviously GPT-3 is out of the question as I'd probably need something like twenty 3090s, so my question is more geared towards the "2nd tier" models like GPT-2, T5, or BART. I have quite a bit of experience with fine-tuning encoder transformer models like BERT and have run into memory issues with those in the past, and I know that the aforementioned models are even bulkier, so I'm a little cautious. Does anyone have any experience with fine-tuning these kinds of models? Thanks.
0.5
t3_rdq5rp
1,639,192,320
LanguageTechnology
Universal Grammar (1) | Noam Chomsky
nan
0.88
t3_rdpsw4
1,639,191,168
LanguageTechnology
Can someone suggest some good NLP resources?
I'm just starting out in NLP. I find it really intriguing and want to learn more about it and get some hands-on experience. I'd be glad if y'all can suggest some videos, books, courses, etc.!
0.67
t3_rdks94
1,639,176,192
LanguageTechnology
How Good is Your Chatbot? An Introduction to Perplexity in NLP
nan
0.9
t3_rdjtp8
1,639,173,632
LanguageTechnology
New DeepMind Research Studies Language Modeling At Scale
Language is essential to being human because of its role in demonstrating and promoting comprehension – or intellect. It allows people to express ideas, create memories, and foster mutual understanding by sharing their thoughts and concepts.

The research and study of more sophisticated language models – systems that predict and generate text – has enormous potential for developing advanced AI systems, including systems that can securely and efficiently summarise information, provide expert advice, and follow directions using natural language. Research on the possible impacts of language models and the risks they entail is required before they can be developed. This includes working together with experts from many fields to foresee and fix the problems that training algorithms on current datasets can cause.

Quick Read: https://www.marktechpost.com/2021/12/10/a-new-deepmind-research-studies-language-modeling-at-scale/

Paper 1: https://storage.googleapis.com/deepmind-media/research/language-research/Training%20Gopher.pdf

Paper 2: https://arxiv.org/abs/2112.04359

Paper 3: https://arxiv.org/abs/2112.04426
0.75
t3_rdedez
1,639,158,912
LanguageTechnology
What is the task(s) called if I want to generate questions automatically based on a document?
**Input:** I have a bunch of documents which contain the introduction of a company and some text about information requests, like: investors must provide the details of their income sources.

**Desired output:** questions generated automatically, like: What are your income sources?

I have found some potential open-source solutions like [https://github.com/ramsrigouthamg/Questgen.ai](https://github.com/ramsrigouthamg/Questgen.ai)

Just want to ask if you guys have any ideas on this problem, or what keywords I should research. The problem seems to consist of two tasks:

1. Identify and extract the desired text from the input document that can be used to generate questions. **I am not sure what this task is called in NLP, so I don't know where to start my search in the literature.**
2. Generate questions from the extracted text. **It is certainly a text generation task, but it's not a common task like text summarization or translation. I am not sure if there is a more specific name/keyword for this task.**
0.76
t3_rdd1d0
1,639,155,456
LanguageTechnology
Determining subject company (listed stocks) referred to in many short text samples
I have a large number of unlabeled data samples (under 100 words each) where most samples refer to a specific company, which I need to determine; the company can be referred to by stock symbol or name. My best attempt so far involves parsing out stock symbols using regular expressions, searching a database, and calculating relative Levenshtein distances of the search results vs. the sample text to make a best guess; this is a bit over 60% accurate in ideal cases. I have two main issues that I can see:

1. I am getting some false positives in cases where the symbol actually matches, but it's the wrong company (maybe on a different exchange).
2. In cases where no stock symbol is specified (just a company name), I am getting no results, as I don't currently handle bare company names.

**For issue 1 - the false positives:** The database I am searching against also contains company descriptions or titles for each search result. What would be the best way of comparing the company descriptions of each search result with the sample text to get a more accurate guess? I am thinking some sort of keyword comparison would help here - I know that factoring in the context of the sample text is critical.

**For issue 2 - no stock symbol:** I think the best candidate for this case would be to leverage token classification to find "ORG" entities; I have tried this with a few pre-trained models from HuggingFace, but haven't had great results. Can anyone recommend a model that is pre-trained on financial data, or would even just work well for recognizing company names? **In addition to this**, would anyone know of a good dataset or strategy for further training the model for this purpose? If anyone has an alternate suggestion for issue 2, I would also be open to that.

Note that I am relatively new to machine learning, but I do understand the basics of how transformer models work, how to use them, and the different types of classification problems.
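For issue 1, one way to use the descriptions is to score each symbol candidate by the similarity between its database description and the sample text. A minimal standard-library sketch of the parse-then-fuzzy-match idea described above; the ticker regex and the candidate dictionary are assumptions:

```python
# Extract ticker-like tokens, then rank candidates by description similarity.
import re
from difflib import SequenceMatcher

TICKER_RE = re.compile(r"\$?\b[A-Z]{1,5}\b")  # naive ticker pattern

def best_company_match(sample_text: str, candidates: dict) -> tuple:
    """candidates maps symbol -> company description from the database."""
    best, best_score = "", 0.0
    for symbol, description in candidates.items():
        # compare the sample against the candidate's description, not just its symbol
        score = SequenceMatcher(None, sample_text.lower(), description.lower()).ratio()
        if score > best_score:
            best, best_score = symbol, score
    return best, best_score

symbols = TICKER_RE.findall("Bought more $AAPL today, great earnings")
print(symbols)
print(best_company_match("great earnings from the iPhone maker",
                         {"AAPL": "Apple Inc designs consumer electronics"}))
```

A TF-IDF keyword overlap between sample and description would likely beat raw edit distance here, since descriptions and posts rarely share surface form.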
1
t3_rdbpio
1,639,151,872
LanguageTechnology
Increasing the Accuracy of Textual Data Analysis on a Corpus of 2 Billion Words
[https://engineering.soroco.com/increasing-the-accuracy-of-textual-data-analysis-on-a-corpus-of-2000000000-words-part-1/](https://engineering.soroco.com/increasing-the-accuracy-of-textual-data-analysis-on-a-corpus-of-2000000000-words-part-1/) At Soroco, we ingest between 200 million and 2 billion words over the course of model training and analysis for a single team of workers using our Scout product. In this blog post, I talk about some tips and tricks that we might use to increase the accuracy of our models, including appropriate processing of text for the purpose of leveraging standard techniques from machine learning. I then demonstrate this by showing how to represent text in a high-dimensional vector space with applications to a toy regression problem.
1
t3_rd21rx
1,639,116,800
LanguageTechnology
5 Text Decoding Techniques that every “NLP Enthusiast” Must Know
nan
0.78
t3_rd3geu
1,639,122,176
LanguageTechnology
The Toxicity Dataset — building the world's largest free dataset of online toxicity
nan
0.87
t3_rcov8z
1,639,077,376
LanguageTechnology
Tips about building a chatbot with GPT-3 or GPT-J
Hello! I've been getting more and more questions from people trying to leverage GPT-3 or GPT-J for their next chatbot, and the questions are almost always about 2 things:

* How do I format my requests so the model understands that I am in conversational mode?
* How can the model keep a history of my conversation?

I answer these 2 points in this quick article: [https://nlpcloud.io/how-to-build-chatbot-gpt-3-gpt-j.html](https://nlpcloud.io/how-to-build-chatbot-gpt-3-gpt-j.html?utm_source=reddit&utm_campaign=k431103c-ed8e-11eb-ba80-5242ac130007)

I hope it will help! If you have any questions, please don't hesitate to ask.
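For anyone who wants the shape of the answer before reading: both points usually come down to building the prompt yourself. A minimal sketch; the header text and "Human:/AI:" turn labels are an assumed convention, not a requirement of GPT-3 or GPT-J:

```python
# Conversational prompt with a rolling window of history.
history = []  # list of (user_utterance, bot_reply) pairs

def build_prompt(user_message: str, max_turns: int = 5) -> str:
    lines = ["This is a conversation between a human and a friendly AI assistant.", ""]
    for user, bot in history[-max_turns:]:  # keep recent turns within the context limit
        lines.append(f"Human: {user}")
        lines.append(f"AI: {bot}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")
    return "\n".join(lines)

prompt = build_prompt("What can you do?")
# send `prompt` to the completion endpoint, then append (message, reply) to history
```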
0.86
t3_rckzpj
1,639,066,624
LanguageTechnology
Best way to vectorize names of medical conditions/diseases?
Let's suppose the aim is to predict, say, hospital charges incurred (there are other predictor parameters too). I have thought of the following ways of vectorizing the condition names so far:

1. I don't think using word2vec makes a lot of sense, because similarity of words is meaningless here?
2. Find a huge medical corpus online and build a count-vectorizer matrix for each row's medical condition. But that would mean the matrix is too sparse.
3. Use only the medical conditions in the dataset as the corpus and build a count-vectorizer matrix from them?
4. Pick only the top few hundred words and use them as the corpus.

If there's any other way you can think of, do let me know. I admit I don't know much about NLP.
0.79
t3_rchqz7
1,639,056,768
LanguageTechnology
open source sentence rephrasing
I have numerical data and can come up with a basic sentence from it (e.g. "your credit score is good"). I want to make this response seem natural and not bot-like, i.e. the response varies every time, doesn't change the meaning, and sounds human. What is the best technology available? Is NLP Cloud's paraphrasing a good fit, or are there similar/better services?
1
t3_rcd6cs
1,639,037,696
LanguageTechnology
CtrlGen Workshop at NeurIPS 2021 (Controllable Generative Modeling in Language and Vision)
Excited by generation, control, and disentanglement? Come to our CtrlGen controllable generation workshop ([https://ctrlgenworkshop.github.io](https://ctrlgenworkshop.github.io/?fbclid=IwAR2lx-sDgf_snUoI16g79geBeAHJ__6i9Wd6duQQbJRlrg4xI76jDutg9iA)) at NeurIPS next Monday, December 13th! We feature a mix of 7 talks on the latest in controllable generation, a live QA + panel discussion, poster presentations of several interesting works, creative demos of controllable generation systems, and networking opportunities. This is an effort organized with researchers from Stanford, CMU, Microsoft, Dataminr, and the University of Minnesota. Our invited speakers and panelists include researchers from Facebook, Google, DeepMind, University of Washington, New York University, Stanford, and Tel-Aviv University.
0.88
t3_rc5z8r
1,639,012,352
LanguageTechnology
Numerizer - Spacy powered Streamlit deployed on Hugging Face for Free - Applied NLP Tutorial
nan
0.86
t3_rbyxw5
1,638,991,744
LanguageTechnology
Meta AI Develops A Conversational Parser For On-Device Voice Assistants
A variety of devices such as computers, smart speakers, cellphones, etc., utilize conversational assistants for helping users with tasks ranging from calendar management to weather forecasting. These assistants employ semantic parsing to turn a user's request into a structured form with intents and slots that may be executed later. However, to access larger models operating in the cloud, the request frequently needs to go off-device. Complex semantic parsers use seq2seq modeling, and auto-regressive generation (token by token) has a latency that makes such models impractical for on-device modeling.

Facebook/Meta AI introduces a new model for on-device assistants and illustrates how to make larger server-side models less computationally expensive.

Quick Read: [https://www.marktechpost.com/2021/12/08/meta-ai-develops-a-conversational-parser-for-on-device-voice-assistants/](https://www.marktechpost.com/2021/12/08/meta-ai-develops-a-conversational-parser-for-on-device-voice-assistants/)

Paper 1: https://arxiv.org/pdf/2104.04923.pdf

Paper 2: [https://arxiv.org/pdf/2104.07275.pdf](https://arxiv.org/pdf/2104.07275.pdf)

Facebook Blog: https://ai.facebook.com/blog/building-a-conversational-parser-for-on-device-voice-assistants
0.86
t3_rbv9hq
1,638,981,632
LanguageTechnology
A Visual Guide to Prompt Engineering [With GPT language models]
nan
0.75
t3_rbtfut
1,638,976,384
LanguageTechnology
AI, DL, NLP,.. resources
Hi, Have a look at a great resource on [https://www.techontheedge.com/mobile](https://www.techontheedge.com/mobile?fbclid=IwAR3re_Ej70FRcwJJziNWjwcoHuHpSyrGtYf8CCgn4QiYYRiZfryR5vaAXWw). You will find the very latest news, articles, research in AI, ML, DL, NLP, web, mobile,... and the wider computing space.
0.44
t3_rbqhya
1,638,967,296
LanguageTechnology
Zero-Shot Event Classification for Newsfeeds (including Notebooks and Code examples)
nan
0.88
t3_rbp2ox
1,638,962,048
LanguageTechnology
General character string outlier detection
Hi, I'd like to preface this question by saying that I'm not looking for a solution to a problem in a specific dataset, as I already have those. I'm curious to know if there's a concise method to solve this problem in general.

The problem: detect abnormal character strings in a set of non-language character strings; effectively, to model normal behaviour as a combination of string and pattern frequency without prior information.

Data: normal string behaviour may be due to high frequency, matching a set of patterns, or some combination of the two. Typically strings will either be high frequency with no useful patterns or low frequency with common patterns. They could be as short as two characters and their length may vary. They'll be a combination of alphanumeric characters and punctuation, within which there may be a set of specific characters with either absolute or pattern-relative location. Both high-frequency and common-pattern strings may occur with different orders of magnitude. Datasets may be 10-100k in size, with a specific string occurring as often as 50% of the time, or as little as once while matching a pattern.

Solution: this should be unsupervised, ideally model the frequency/pattern tradeoff itself, and generalise to any dataset described above with minimal intervention.

Appreciate any contributions in advance, even if it's just a set of keywords to describe the problem. I realise this is quite a vague problem statement, but I can detect these strings myself by eye with a combination of string and regular expression counts, so I'm wondering what the world of NLP (I guess not technically NLP) can do.
1
t3_rbomdu
1,638,960,256
LanguageTechnology
Multiple SEP tokens for keyword searches.
I am trying to train a siamese-net-style BERT-based retrieval model that supports both semantic queries and keyword queries for my domain. The keywords can be of different categories (for example, some are related to product specifications, others to usability features, manufacturers, etc.). To index a document, I want to use SEP tokens to separate the different categories of keywords and the product description, e.g.:

[CLS] <Product description> [SEP] <Keywords type_1> [SEP] <Keywords type_2> [SEP] ...

The query can be a product description, keywords, or a combination of the two. My training dataset size is 100K samples. Has anyone used an approach like this? Has it worked?
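For the mechanics (separate from whether it helps retrieval): tokenizer text pairs only support two segments, so a common workaround is to join the segments on the tokenizer's `sep_token` yourself. A minimal sketch, with an assumed product example:

```python
# Build [CLS] desc [SEP] kw_group_1 [SEP] kw_group_2 [SEP] manually.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

description = "Cordless drill with brushless motor"
keyword_groups = ["18V 2Ah battery", "DIY woodworking"]

text = f" {tok.sep_token} ".join([description] + keyword_groups)
enc = tok(text, truncation=True, max_length=256, return_tensors="pt")
print(tok.decode(enc["input_ids"][0]))  # [CLS] ... [SEP] ... [SEP] ... [SEP]
```

Note that with this approach all token_type_ids stay 0, so the model only distinguishes segments by the SEP positions themselves.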
1
t3_rb9lph
1,638,911,360
LanguageTechnology
List of filler words?
I would like to detect filler words in transcripts of spoken English. I couldn't find a package or function for it (stop words don't work). I was wondering if anyone has a compiled list to share or refer to.
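In the absence of a packaged list, a hand-rolled starter set is a workable baseline. The list below is an assumption to extend for your domain, not a standard resource:

```python
# Crude filler-word detector over a small hand-written list.
FILLERS = {
    "um", "uh", "er", "ah", "like", "you know", "i mean", "sort of",
    "kind of", "basically", "actually", "literally", "well", "right",
}

def find_fillers(text: str) -> list:
    lowered = f" {text.lower()} "
    return [f for f in FILLERS if f" {f} " in lowered]  # whole-word/phrase match

print(find_fillers("Well, I mean, it was like really good, you know"))
```

Words like "like" and "well" are only fillers in some contexts, so a POS-tag or position heuristic on top of the list would cut false positives.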
0.88
t3_rb6vfo
1,638,904,064
LanguageTechnology
Has anybody tried to retrain Stanza NER on new data?
I have been trying to follow the instructions on this page [https://stanfordnlp.github.io/stanza/training.html#ner-data](https://stanfordnlp.github.io/stanza/training.html#ner-data) to retrain Stanza on a new dataset for NER. I have managed to convert my .iob files (training, development and test datasets) into the .json files required by the model. However, I don't understand where I should put my data for "run_ner.py" to run successfully. This command is mentioned on the page:

    python -m stanza.utils.training.run_ner fi_turku

But I don't understand what "fi_turku" is supposed to be. I know it's a sample corpus I can download, but what is it exactly? A directory containing the three .json files? What is the path to it? It seems like the only problem is the path to the new dataset I want to train the model on, but I'm failing to understand where exactly I should put it.
0.67
t3_rb3tvm
1,638,898,048
LanguageTechnology
How to implement a weighted string classifier that results in an exportable model and also gives a confidence score?
Hi there! I've been working on a side project that works vaguely like this: [https://imgur.com/a/g1FvKCM](https://imgur.com/a/g1FvKCM) <-- link to flowchart, because apparently you guys hate images.

I already built the labeler; as you can see from the image, it produces an already-cleaned CSV file with the structure

> lowercase stopwords-removed string i want to classify , [1/-1]

with 1 or -1 as the value I want to label that string with. BTW: the project is all in Python 3. Now, I need to build the classifier with these requirements:

* It creates an exportable model I can then use in a discriminator, which will process future inputs based on this model.
* I absolutely need the discriminator to give me a confidence score when evaluating inputs, because I will keep only outputs with a certain confidence score or higher.
* It applies a weight based on data recency: later data in the file gets a higher weight.
  * On a scale from 0 ("I don't consider this at all") to 1 ("this is the most important piece of information I will ever handle"), I would like to apply a softly growing weight, something like [this](https://imgur.com/a/mIoaaD8).
  * I don't actually know if this is possible or not, but if it is I would definitely do it, even if it makes things way more complicated.

All the tutorials, GitHub repos, and videos I found went with the Bayes or linear regression approach, which I also tried; the result was not that bad (AUC around 0.7), but it didn't solve either of the two problems in the bullet list above, so I'm quite stuck. I did some image processing in the past, so I thought it would be easier to handle strings, but so far it's been giving me trouble. See the sketch below for one way the pieces could fit together.

I really appreciate any supportive comment, indication, guide, or study material. Thank you all.

**TLDR: Just read the title.**
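A minimal sketch covering all three requirements with scikit-learn: an exportable model (joblib), confidence scores (`predict_proba`), and recency weights (`sample_weight`). The logistic ramp is one assumed shape for the "soft growing" curve, and the tiny dataset is a placeholder:

```python
import numpy as np
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product works fine", "broke after one day", "love it"]
labels = [1, -1, 1]  # ordered oldest -> newest, as in the CSV

# logistic ramp: early rows weighted near 0, recent rows near 1
positions = np.linspace(-6, 6, len(texts))
weights = 1.0 / (1.0 + np.exp(-positions))

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels, logisticregression__sample_weight=weights)

joblib.dump(model, "model.joblib")                 # exportable model
proba = model.predict_proba(["still works fine"])  # per-class confidence
print(model.classes_, proba)                       # keep rows above your threshold
```

Any sklearn estimator that accepts `sample_weight` (naive Bayes included) can slot in the same way, so the recency weighting doesn't lock you into one model family.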
1
t3_rb0d5n
1,638,887,296
LanguageTechnology
[Research paper] Hierarchical Topic Modelling Over Time
Hello Reddit, I am proud to present HTMOT, for Hierarchical Topic Modelling Over Time. This paper proposes a novel topic model able to extract topic hierarchies while also modelling their temporality. Modelling time provides more precise topics by separating lexically close but temporally distinct topics, while modelling hierarchy provides a more detailed view of the content of a document corpus.

[https://arxiv.org/abs/2112.03104](https://arxiv.org/abs/2112.03104)

The code is easily accessible on GitHub, and a working interface provides the ability to navigate through the resulting topic tree with ease: [https://github.com/JudicaelPoumay/HTMOT](https://github.com/JudicaelPoumay/HTMOT)
0.92
t3_rav3g5
1,638,868,352
LanguageTechnology
(Okapi) BM25 with using hierarchically clusterized keywords
Hey, all! Hope you are doing well! Do you know of any work which tries to do Okapi BM25 matching using hierarchically clustered words? Relabeling all tokens of a subtree to the same value would combine similar words into the same token_id; lower subtrees imply closer words. This would be a query and document enrichment. And now, with robust word embeddings and clustering algorithms, this approach seems feasible. Also, this is a quite immediate idea, so someone must have already done it. Do you know of any work on this? Cheersss
1
t3_rakwq0
1,638,833,408
LanguageTechnology
Language education architecture
Hi all, I'm relatively new to the language domain - designing services that support language education. What's the best practice for associating metadata with words and sentences? This would include audio, video, pronunciation, and other words/sentences considered related. I've been reading up on various NLP corpus functionality, which seems lower level (i.e. either just the text, or some structure that is pretty specific); even multi-modal corpora don't seem to cover everything. Am I getting the correct sense here? I've seen references to lexical resources, which seems like the right direction, but I don't see any dominant libraries for that (I'm a Python guy). It seems somewhat straightforward to have a persistent lookup, especially if I assign an index key to all the words and sentences that I can then base the metadata on, but I don't want to reinvent a wheel unnecessarily.
1
t3_rai9z2
1,638,826,112
LanguageTechnology
Is there an open-source way to replicate entity-level sentiment from Google's Cloud Natural Language API?
I'm learning about NLP and was really impressed with Google's Natural Language API ([demo](https://cloud.google.com/natural-language#section-2)). It seems that entity-level sentiment analysis is the future of NLP. Has anyone in the community come across open-source libraries that replicate the API, for learning purposes? I found an excellent [repo](https://github.com/songyouwei/ABSA-PyTorch) called ABSA-PyTorch, but it seems that all the implementations are classification-based; that is, they return "positive/negative" rather than a spectrum between positive and negative. Is there a subfield of Aspect-Based Sentiment Analysis (ABSA) that isn't classification-based? I wasn't able to find any keywords despite hours of Google searching.
1
t3_rahmej
1,638,824,448
LanguageTechnology
Need help with clustering keywords
I have a set of keywords and can extract similar keywords using a word2vec model (with cosine similarity scores) or calculate similarity scores with a BERT model. I need to cluster the keywords that are semantically similar. Any help with the type of clustering to use would be appreciated - I just need a discussion before I try to implement it.
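One common recipe is embeddings plus agglomerative clustering with cosine distance, since you don't have to fix the number of clusters up front. A minimal sketch; the sentence-transformers model name and the distance threshold are assumptions to tune:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

keywords = ["laptop", "notebook computer", "coffee", "espresso", "gpu"]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(keywords)

# n_clusters=None + distance_threshold lets the data decide the cluster count
clustering = AgglomerativeClustering(
    n_clusters=None, metric="cosine", linkage="average", distance_threshold=0.5
)  # on scikit-learn < 1.2 the parameter is affinity="cosine" instead of metric
labels = clustering.fit_predict(emb)
print(dict(zip(keywords, labels)))
```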
1
t3_rae2xx
1,638,815,360
LanguageTechnology
Fine tuning BERT for token classification.
Hello guys, I have a question regarding my work. I am pretty new to NLP. I want to try self-supervision and semi-supervised learning for my task at hand. The task is token-wise classification for two sequences of sentences (source and translated text). The labels are just 0 and 1, determining whether the word-level translation is good or bad on both the source and target side.

To begin, I used XLM-RoBERTa, as I thought it would be best suited for my problem. First, I just trained normally, using nothing fancy, but the model overfits after just one or two epochs, as I have very little data to fine-tune on (approx. 7k). I decided to freeze the BERT layers and just train the classifier weights, but it performed worse. I thought of adding a bigger dense network on top of BERT, but I am not sure whether it would work well. One more thought that occurred to me was data augmentation, where I increased the size of my data by multiple factors, but that performed badly as well. (Also, I am not sure what the proper factor is for increasing the dataset size with augmented data.)

Can you please suggest which approach could be suitable here, and whether I am doing something wrong? Shall I just train all the layers on my data, or is freezing actually a good option? Or do you suspect I am going wrong somewhere in the code and this is not the expected behaviour? I know I have many questions, but you are free to help as much as you can :) Thanks a ton in advance.
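Between "freeze everything" and "train everything" there is a middle ground: freeze only the embeddings and the lower encoder layers. A minimal sketch for XLM-R token classification; the cutoff of 8 layers is an assumption to tune against the 7k-sample budget:

```python
# Partial freezing: lower layers frozen, upper layers + head trainable.
from transformers import XLMRobertaForTokenClassification

model = XLMRobertaForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

for param in model.roberta.embeddings.parameters():
    param.requires_grad = False
for layer in model.roberta.encoder.layer[:8]:  # freeze lower 8 of 12 layers
    for param in layer.parameters():
        param.requires_grad = False
# remaining layers and the classifier head stay trainable; combine with a small
# learning rate and early stopping to fight overfitting on small data
```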
0.84
t3_ra9k6s
1,638,803,840
LanguageTechnology
How to capture words order in a sentence?
Hi guys, I'm a data science student and I'm trying to capture the word order in a sentence, to check whether n triples (subject, predicate and object) respect this order. For example, given the phrase "Rogue is a comedy movie" and these 3 triples:

1. [Rogue, is, movie]
2. [Rogue, movie, is]
3. [Movie, is, Rogue]

In this example, only the first triple is correct (for my task). I guess that in order to achieve my goal I have to vectorize the reference sentence, but I don't understand how to capture the correct order so that only the first triple turns out to be right. How can I do this? Thanks all.

EDIT: My dataset is, for more than 90%, composed of simple sentences where the S + V + O order is respected, so the extracted triples, as a second check, should follow this order too. As a first check, I thought about using bag-of-words or TF-IDF to check for the presence of the words contained in the extracted triple within the reference sentence. However, it is still not clear to me how to check whether the word order within the triple is respected. I know it's a coarse approach; however, it serves as a basic control for skimming.
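For the order check itself, no vectorization is needed: find the position of each triple element in the sentence and require strictly increasing positions. A minimal sketch; using the first token of multi-word elements and ignoring inflection are assumed simplifications:

```python
def respects_order(sentence: str, triple: list) -> bool:
    tokens = sentence.lower().split()
    positions = []
    for element in triple:
        head = element.lower().split()[0]  # first token of a multi-word element
        if head not in tokens:
            return False
        positions.append(tokens.index(head))
    # strictly increasing positions = sentence order respected
    return positions == sorted(positions) and len(set(positions)) == len(positions)

sentence = "Rogue is a comedy movie"
print(respects_order(sentence, ["Rogue", "is", "movie"]))  # True
print(respects_order(sentence, ["Rogue", "movie", "is"]))  # False
print(respects_order(sentence, ["Movie", "is", "Rogue"]))  # False
```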
0.91
t3_ra4f16
1,638,787,200
LanguageTechnology
How to use Textblob for sentiment analysis?
I'm using TextBlob to identify whether a paragraph of text is positive or negative. I'm new to TextBlob. For my data, I cleaned the text (removed stop words, expanded contractions, removed punctuation, etc.), tokenized it into sentences and then into words, performed lemmatization, and then applied TextBlob to the lemmatized data. I read that TextBlob does all of this, as well as POS tagging, when calling TextBlob(). I was wondering: do I need all the steps that I performed beforehand, or is calling TextBlob enough?
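For reference, TextBlob's default analyzer works on raw text, so the heavy preprocessing is usually unnecessary; removing stop words can even hurt, since the lexicon keys off surface patterns like "not good". A minimal sketch:

```python
from textblob import TextBlob

blob = TextBlob("The plot was not good, but the acting was wonderful.")
print(blob.sentiment.polarity)      # -1.0 .. 1.0
print(blob.sentiment.subjectivity)  # 0.0 .. 1.0
for sentence in blob.sentences:     # sentence splitting is built in
    print(sentence, sentence.sentiment.polarity)
```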
0.78
t3_r9r2xq
1,638,742,784
LanguageTechnology
sort 150k facebook posts in hebrew to 3 defined topics
I have an existing list of 150k public posts that were extracted from Facebook for academic research purposes; they are all in Hebrew. I need to tag each post with one of 3 categories: General News, Political, Other. I know these categories are a bit vague. Is there a tool/method I can use to train a model to sort these posts into the categories? I'm not an expert in ML or NLP, so I will just clarify what I mean: a tool where I can tag a few thousand posts according to the categories and then let the model tag the rest of the posts automatically.

\*The posts cannot be translated to English.

Thanks!
0.87
t3_r9hldc
1,638,716,800
LanguageTechnology
Reproducing WebNLG Challenge 2017 on OpenNMT-py
Hi guys, I'm a data science student and I'm learning to use OpenNMT-py for my master's degree thesis. I reproduced the challenge with the old deprecated repository; now I would like to replicate it with the updated repository (as I will need it for a similar task within my thesis). I am new to the NLP field, and I am not very clear about some things:

* Since it is not a translation task, is it necessary to build a vocabulary as in the OpenNMT-py machine translation tutorial?
* The epochs option, I noticed, has been deprecated; training is now specified with train_steps, but I am not clear about the "conversion", so to speak. With the old repository, the number of epochs to train the model was 13. I tried to work it out by looking at old issues in these repositories: default train_steps (100000) / default batch_size (64) * 13 (epoch count from the old repository) = 20313. Is this reasoning correct?

Thanks everyone for your attention.
1
t3_r9cyld
1,638,700,032
LanguageTechnology
What is the difference between Rule-Based & Feature-Based methods in sentiment analysis?
I use TextBlob to get label values for texts (positive or negative) and then use logistic regression for training and prediction. Is this a feature-based method or a rule-based method?
0.8
t3_r9bb4p
1,638,693,120
LanguageTechnology
What are the leading knowledge evaluation models?
I'm new to NLP and ML. I've been playing with GPTJ and other stuff provided by huggingface. I'm also playing with compromise and nltk. I have some ideas I want to try with regards to knowledge extraction from multiple sources. One problem I imagine is, what are the preferred ways to evaluate the truthiness of a statement? I see that T0PP can extract information from within a contained context, but what about from the unbounded context of reality? If anyone can help me out with clues or ideas that would be awesome! Thanks gang.
0.93
t3_r8dtkk
1,638,579,968
LanguageTechnology
What is topic modeling and how can it help with sentiment analysis?
If I apply it to my data will it change the outcome of my sentiment analysis?
0.71
t3_r80t23
1,638,543,104
LanguageTechnology
Need ideas for a story generator
Hi, I am working on a story generator, and beyond fine-tuning a pre-trained model I need more ideas to make it interesting - for example, how could I make it generate the beginning, body, and end of a story? Share your thoughts pls, thx.
0.78
t3_r7vmvs
1,638,525,056
LanguageTechnology
Doubt about the originality of a submisssion in ICLR2022
*Hindsight: Posterior-guided training of retrievers for improved open-ended generation* is a manuscript submitted to the ICLR 2022 conference that received relatively very high scores (8 8 6 6). But I doubt the originality of this work, since it is very similar to [1] and [2]. What do you think? Is this article spinning?

[1] Lian, Rongzhong, et al. "Learning to select knowledge for response generation in dialog systems." arXiv preprint arXiv:1902.04911 (2019).

[2] Kim, Byeongchang, Jaewoo Ahn, and Gunhee Kim. "Sequential latent knowledge selection for knowledge-grounded dialogue." arXiv preprint arXiv:2002.07510 (2020).
1
t3_r7sy6b
1,638,514,304
LanguageTechnology
Help with Sentence splitting
Hey! I'm using Python with the spaCy library. Can anyone help me with sentence splitting? I have some court decisions to analyze. How can I write a sentence-splitting extension so that it doesn't split inside quotes?
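One way to do this in spaCy 3 is a custom boundary component that vetoes sentence starts inside quotes, added before the parser so the preset boundaries are respected. A minimal sketch; it only handles straight double quotes, and typographic quotes would need extra cases:

```python
import spacy
from spacy.language import Language

@Language.component("no_split_in_quotes")
def no_split_in_quotes(doc):
    in_quote = False
    for token in doc:
        if token.text == '"':
            in_quote = not in_quote
        elif in_quote:
            token.is_sent_start = False  # forbid a sentence boundary here
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("no_split_in_quotes", before="parser")

doc = nlp('The court held: "This is one quote. It stays together." Then it adjourned.')
for sent in doc.sents:
    print(sent.text)
```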
1
t3_r7sapp
1,638,512,000
LanguageTechnology
Low rent automated essay scoring
I am building an online elementary history course and I'd like to ask students to write a paragraph on an inquiry question, e.g. "How did the Seven Years' War help cause the Revolutionary War?" Unfortunately I don't have human graders, or a dataset of graded responses, or an NLP/ML programmer for that matter. I'm thinking I could just count the number of sentences and the number of key phrases the student mentions, for low-rent automated essay scoring (see the sketch below). It might be labor-intensive to come up with variations of the keywords. Does anyone know of open-source or commercial solutions like this that work well? The goal is to give the student enough feedback/scaffolding that they feel it is worth writing down their thoughts.
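A minimal sketch of that rubric: sentence count plus hits from hand-written keyword variants. Both the phrase lists and the output fields are assumptions to adapt per inquiry question:

```python
import re

KEY_PHRASES = {
    "debt": ["debt", "war debt", "expenses"],
    "taxes": ["tax", "taxes", "stamp act", "taxation"],
    "territory": ["territory", "land", "frontier"],
}

def score_paragraph(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lowered = text.lower()
    hits = {topic: any(p in lowered for p in phrases)
            for topic, phrases in KEY_PHRASES.items()}
    return {"sentences": len(sentences),
            "topics_covered": sum(hits.values()),
            "details": hits}  # per-topic feedback for scaffolding

print(score_paragraph("The war left Britain in debt. Parliament raised taxes."))
```

Stemming the student text first (or listing stems like "tax" as above) cuts down on the variation-listing work.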
1
t3_r7moad
1,638,494,464
LanguageTechnology
Topic Model - Generating more Context
I am new to Python and just started playing around with LDA - using pyLDAvis to visualize the keywords from a few documents. I'm a novice at best.

The problem: I find it difficult to determine accurate topics (for the LDA model) that explain the list of keywords, because there is not enough context to frame the topic. Maybe my model sucks. Does anyone know how to do the following?

1. Extract all of the sentences from a [file](https://www.mckinsey.com/business-functions/people-and-organizational-performance/our-insights/building-workforce-skills-at-scale-to-thrive-during-and-after-the-covid-19-crisis) (a URL for the sake of this query) that contain the most salient terms from my LDA model, e.g. Skills, Digital, Pandemic, etc., prior to any pre-processing. The way I see this looking (in a DataFrame) is that column A would hold a salient term, e.g. Skill, and column B would contain the sentences, i.e. context for the topics in the LDA model (see the sketch below).

I would appreciate any guidance on this. Cheers
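A minimal sketch of that term-to-sentences DataFrame, run on the raw text before any pre-processing; the salient-term list and the local file path are placeholders:

```python
import re
import pandas as pd

salient_terms = ["skills", "digital", "pandemic"]  # from the fitted LDA model
raw_text = open("article.txt", encoding="utf-8").read()  # assumed saved copy of the page

# naive sentence split on end punctuation followed by whitespace
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", raw_text) if s.strip()]

rows = [(term, sent)
        for term in salient_terms
        for sent in sentences
        if term in sent.lower()]

df = pd.DataFrame(rows, columns=["term", "context_sentence"])
print(df.head())
```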
1
t3_r7djy6
1,638,468,992
LanguageTechnology
Literature on plain language generation?
Hello. Does anybody know of any literature on [plain language](https://en.wikipedia.org/wiki/Plain_language) generation? This might be considered a kind of summarisation or paraphrasing task – or maybe even a type of machine translation task – but I'm wondering if there are any recent papers specifically on plain language. E.g., {It is incumbent on the buyer to furnish all requisite documents.}=>{The buyer must provide the necessary paperwork.} Thanks in advance.
0.5
t3_r79793
1,638,456,960
LanguageTechnology
Formal grammar parser for English
Hello, I am looking for a parser for English - not a dependency parser, but a formal grammar parser (i.e. one that builds trees with rules such as S -> NP VP, VP -> V NP, and so on). I thought that finding one would be easy, but when I searched I couldn't find any good ones; I only found one library that gives wrong parses for some very simple sentences. Any suggestions out there? Please help - I need this for a class project and the deadline is close.
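For a class project, NLTK's chart parser over a hand-written CFG is the usual starting point; it only covers sentences you write rules for. The grammar and lexicon below are illustrative assumptions:

```python
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N | N
VP -> V NP | V
Det -> 'the' | 'a'
N -> 'dog' | 'man' | 'park'
V -> 'saw' | 'walked'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog saw a man".split()):
    tree.pretty_print()  # draws the constituency tree with S -> NP VP etc.
```

For broad-coverage constituency trees without writing a grammar, the benepar (Berkeley Neural Parser) package plugs into spaCy and outputs standard bracketed parses.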
1
t3_r6w34h
1,638,411,392
LanguageTechnology
Cohen's kappa — worth the hype?
I often see subtle misuses of interrater reliability metrics. For example, imagine you're running a Search Relevance task, where search raters label query/result pairs on a 5-point scale: Very Relevant (+2), Slightly Relevant (+1), Okay (0), Slightly Irrelevant (-1), Very Irrelevant (-2). Marking "Very Relevant" vs. "Slightly Relevant" isn't a big difference, but "Very Relevant" vs. "Very Irrelevant" is. However, most IRR calculations don't take this kind of ordering into account, so it gets ignored. I wrote [an introduction to Cohen's kappa](https://www.surgehq.ai/blog/inter-rater-reliability-metrics-understanding-cohens-kappa) (a rather simplistic and flawed metric, but a good starting point to understanding IRR). Hope it helps. I welcome feedback and am curious to hear the IRR metrics you find yourself relying on most.
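For the ordinal case described above, weighted kappa is the standard fix: linear or quadratic weights penalize "Very Relevant" vs. "Very Irrelevant" far more than adjacent-label disagreements. A minimal sketch with scikit-learn and made-up ratings:

```python
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 1, 0, -1, -2, 2, 1]
rater_b = [2, 2, 0, -2, -2, 1, 1]

print(cohen_kappa_score(rater_a, rater_b))                      # unweighted
print(cohen_kappa_score(rater_a, rater_b, weights="linear"))
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))  # big misses cost most
```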
0.96
t3_r6pmce
1,638,393,856
LanguageTechnology
Custom training issue: best_model_ranking not outputting for certain ConLL files
Hi, I have successfully trained a custom model using neuralcoref on a set of CoNLL files. However, when I add more files from another set I get this error:

    FileNotFoundError: [Errno 2] No such file or directory: '...best_modelallpairs'

best_model_ranking (the custom model I used for coreference resolution with neuralcoref) is not present in the checkpoints folder. Have you encountered this error before? I think it might be because the token distance between coreferences is too long in some of the new CoNLL files. Do you have any ideas? Thank you very much.
1
t3_r6l91f
1,638,382,720
LanguageTechnology
Getting aligned vector representations in two languages
Is there any model (links or references) that can provide me with vector representations of semantically similar sentences from two languages, e.g. English and Croatian? For example, **Ja volim kavu** and **I love coffee** should have similar vectors.
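A minimal sketch, assuming the LaBSE checkpoint on the Hugging Face hub, which was trained to align sentence embeddings across 100+ languages (Croatian included):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")
emb = model.encode(["Ja volim kavu", "I love coffee", "I hate tea"])

print(util.cos_sim(emb[0], emb[1]))  # high similarity for the translation pair
print(util.cos_sim(emb[0], emb[2]))  # lower for the unrelated sentence
```

The multilingual distilled models in the same library (e.g. paraphrase-multilingual variants) are smaller alternatives trained with a similar cross-lingual alignment objective.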
1
t3_r6bzq2
1,638,355,328
LanguageTechnology
How to highlight intonation, word stress in a text?
I'm very new to NLP. When I supply a text file, I want the output to be highlighted with colors to indicate the amount of stress each word should have. Using spaCy it's possible to highlight parts of speech. I also searched on this sub but couldn't find anything related to word stress. A Google search turned up this result: [https://stackoverflow.com/questions/58251398/how-to-detect-sentence-stress-by-python-nlp-packages-spacy-or-nltk](https://stackoverflow.com/questions/58251398/how-to-detect-sentence-stress-by-python-nlp-packages-spacy-or-nltk) - I have the same question, but the answers on SO are focused on speech rather than text.
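For word-internal (lexical) stress, as opposed to sentence-level intonation, the CMU Pronouncing Dictionary via NLTK gets you a long way: its vowels carry stress digits (1 primary, 2 secondary, 0 unstressed). A minimal sketch:

```python
import nltk
nltk.download("cmudict", quiet=True)
from nltk.corpus import cmudict

pron = cmudict.dict()

def stress_pattern(word: str) -> list:
    phones = pron.get(word.lower(), [[]])[0]       # first listed pronunciation
    return [p for p in phones if p[-1].isdigit()]  # vowels carry the stress digit

print(stress_pattern("photography"))  # ['AH0', 'AA1', 'AH0', 'IY0']
```

Mapping those digits to colors per word gives the highlighting; sentence-level prominence would still need a prosody model on top.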
0.86
t3_r6byqn
1,638,355,200
LanguageTechnology
What is the difference between text classification and semantic analysis?
nan
0.8
t3_r69kbo
1,638,345,216
LanguageTechnology
Question about Good Turing Smoothing
As in [https://youtu.be/1vUVNdDkIJI?t=485](https://youtu.be/1vUVNdDkIJI?t=485), I do not understand how the `c*` formula is obtained. It seems different from equation (2) of [https://www.cs.cornell.edu/courses/cs6740/2010sp/guides/lec11.pdf#page=2](https://www.cs.cornell.edu/courses/cs6740/2010sp/guides/lec11.pdf#page=2).
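For reference, a sketch of the standard derivation, assuming the usual Good-Turing notation with $N_c$ the number of n-gram types seen exactly $c$ times and $N$ the corpus size:

```latex
\[
  c^* \;=\; (c+1)\,\frac{N_{c+1}}{N_c}
  \qquad\Longrightarrow\qquad
  P_{GT}\big(w \mid \mathrm{count}(w)=c\big) \;=\; \frac{c^*}{N}
  \;=\; \frac{(c+1)\,N_{c+1}}{N_c\,N}.
\]
```

If equation (2) in the Cornell notes is the probability-level form on the right, the two sources agree: the video's `c*` is just the adjusted count before dividing by `N`.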
1
t3_r5azop
1,638,235,136
LanguageTechnology
NLP with Arabic language
Hi, I am new to Arabic-related NLP and don't have a feel for the actual language. But as per my understanding, there are vowel marks named 'tashkeel' and 'harakat'. I just don't understand why we would need to remove these vowel marks (the strip_tashkeel and strip_harakat functions in pyarabic) from Arabic text before processing it further, and I'm not finding any good answers about it. TIA for your help.
0.67
t3_r55w1g
1,638,220,544
LanguageTechnology
Best available pronoun coreference resolution systems?
I want to study singular "they" and its status in current NLP research. Not looking for answers on here, just for pointers.

1. What are the best systems for pronoun coreference resolution?
2. Have you come across anything related to singular "they" in NLP, for example in pronoun coreference resolution?

Curious to hear what you know!
0.94
t3_r52lzh
1,638,211,712
LanguageTechnology
Evaluating performance of BERT fine tuning for classification
Hello all! I am working on fine-tuning BERT for classification using a custom dataset. NLP is not my area of expertise, but BERT seems like a great tool for classification problems. I was wondering if anyone can give me some metrics I can use to evaluate the performance of my classification model? Thanks in advance!
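The usual suspects are accuracy, per-class precision/recall/F1 (macro-averaged F1 matters most if the classes are imbalanced), and the confusion matrix. A minimal sketch with placeholder labels from a held-out test split:

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 1, 1, 0, 2, 1]  # gold labels from your test set
y_pred = [0, 1, 0, 0, 2, 1]  # model predictions

print(classification_report(y_true, y_pred))  # precision, recall, F1 per class
print(confusion_matrix(y_true, y_pred))       # which classes get confused
```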
0.63
t3_r4tla1
1,638,184,192
LanguageTechnology
Looking for NLP cloud-based technologies expert/consultant
Hello everyone, My team is currently looking for experts/consultants in cloud-based NLP/Text Mining technologies. We are developing a platform that aggregates the best AI engines on the market but some of our prospects want to be supported by specialists in their projects. We are therefore looking for experts for this step of personalized audit (paid) before using our platform. If you are interested, please send me a message or an email: [contact@edenai.co](mailto:contact@edenai.co)! Thank you, Taha
0.75
t3_r4s9tz
1,638,178,688
LanguageTechnology
Sentiment Analysis Questions
Hi all, I'm working on a final project for one of my classes involving sentiment analysis on a data set of IMDB movie reviews (data set courtesy of keras). It's a fairly straightforward binary classification (classify the review as positive or negative). The thing is, I'm a little short on ideas with regards to how to accomplish this. I've already utilized a few models but I feel like they're a little simple, and in the interest of getting into the spirit of things I was wondering if there were more advanced techniques that a novice like me could still use. For what I've already done (all inputs are word embeddings of the dataset imported from keras): 1. CNN 2. LSTM 3. Transformer 4. CNN-LSTM 5. LSTM-SVM Thus far, I've achieved an accuracy of 87-89% for all but the last one, which performed significantly more poorly. I'd say I'm pretty much done with the project (nothing too complex) but I'm interested in seeing what else I can apply for the sake of it. A few things I've been looking at but not sure about implementing: 1. Generative-discriminative models (the generator would perform feature extraction on the data) 2. Data augmentation techniques to use on the data and then feed into the aforementioned models (thus far they've been fairly simple, like swapping in synonyms or antonyms, randomly deleting or adding words, etc.) Any recommended course of action/source for the two ideas I've been looking at? And do you have any other suggestions that I could feasibly implement? I'm using Google Colab (got a premium membership) and the data set isn't very large (25,000 training samples) so computational expense should not be a concern. I'd appreciate any suggestions you guys have!
0.72
t3_r4hcd2
1,638,141,696
LanguageTechnology
Query Intent Classification in chatbots using distilled transformers
Hi, I am writing a paper about query intent classification in chatbots and would like to also have a section about distilled transformers (e.g. DistilBERT), but I have been unsuccessful in finding papers or chatbot companies that use such models. Are distilled transformers simply not used for chatbots? It seems like a good trade-off in terms of resource use and general performance, such as precision. Are there any good papers or company blogs about deploying distilled transformer models in chatbot settings?
0.85
t3_r46zry
1,638,113,920
LanguageTechnology
nlp project
Hello everyone, I want to know how to learn NLP; could you please give me some advice? In fact, these days I have learned some basic models like CNN, RNN, LSTM, the transformer, and BERT, but I don't know how to apply this knowledge in a project. In other words, could you please recommend some interesting projects for me? Thank you so much!
0.54
t3_r3ib94
1,638,034,176
LanguageTechnology
MetaICL: A New Few-Shot Learning Method Where A Language Model Is Meta-Trained To Learn To In-Context Learn
Large language models (LMs) are capable of in-context learning, which involves conditioning on a few training examples and predicting which tokens will best complete a test input. This type of learning shows promising results because the model learns a new task solely by inference, with no parameter modifications. However, the model’s performance significantly lags behind supervised fine-tuning. In addition, the results show high variance, which can make it difficult to engineer the templates required to convert existing tasks to this format. Researchers from Facebook AI, the University of Washington, and the Allen Institute for AI have developed Meta-training for In-Context Learning (MetaICL), a new few-shot learning meta-training paradigm. In this approach, LM is meta-trained to learn in context, conditioning on training instances to recover the task and generate predictions. **You can read** [**a short summary-based article**](https://www.marktechpost.com/2021/11/26/metaicl-a-new-few-shot-learning-method-where-a-language-model-is-meta-trained-to-learn-to-in-context-learn/) [**here**](https://www.marktechpost.com/2021/11/26/metaicl-a-new-few-shot-learning-method-where-a-language-model-is-meta-trained-to-learn-to-in-context-learn/)**. The Github can be** [**accessed here**](https://github.com/facebookresearch/metaicl)**. If you are looking to read the full paper, then you can** [**read it here**](https://arxiv.org/pdf/2110.15943.pdf)**. The demo** [**project is here**](http://qa.cs.washington.edu:2021/)**.**
1
t3_r2sfyk
1,637,948,032
LanguageTechnology
What should I visualize for humor detection model to gain some useful insight?
I was going through a bunch ([1][1], [2][2], [3][3]) of humor detection papers, but most papers don't include any visualizations, say a graph related to the model being trained. I was thinking of training some language models like BERT, GPT, and XLNet, and was wondering what kind of interesting visualization I should aim for, in order to gather the data during training and gain some insight. Or is it that these fine-tuned or zero/one/few-shot models don't have to train for long and don't involve significant learning "from scratch", or are somewhat black boxes, so there is nothing much to visualize?

[1]: https://cs224d.stanford.edu/reports/OliveiraLuke.pdf
[2]: https://arxiv.org/pdf/1909.00252.pdf
[3]: https://arxiv.org/pdf/2004.12765v5.pdf
1
t3_r2rvzw
1,637,946,368
LanguageTechnology
(Need opinion) Sentiment Analysis for long text
I was planning to perform sentiment analysis on news articles, but after reading this post it seems that it will not be easy: https://datascience.stackexchange.com/questions/82313/sentiment-analysis-on-long-and-structured-texts

I have an assessment where I was asked to perform 2 NLP tasks, so I chose sentiment analysis and summarisation. For now I'll perform sentiment analysis on the article title (positive or negative) and then summarise the article.

I have 2 questions:

1. Is there a way to perform sentiment analysis on long texts?
2. From a business perspective, is my idea good? (For example: company XX doesn't have time to read every article about the company from different news outlets; this project will help in analysing articles and determining whether they are positive or negative based on the title, and the team can choose to read the summary of each negative article.)
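On question 1, one common workaround is to chunk the article, score each chunk, and aggregate. A minimal sketch; the default pipeline model and the 400-word chunk size are assumptions:

```python
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

def long_text_sentiment(text: str, chunk_words: int = 400) -> float:
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    scores = []
    for result in sentiment(chunks, truncation=True):
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores.append(signed)
    return sum(scores) / len(scores)  # > 0 leans positive, < 0 leans negative

print(long_text_sentiment("The company beat expectations ... " * 200))
```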
1
t3_r2qkk6
1,637,942,528
LanguageTechnology
Learning how to read papers
Hello everyone! I need some advice on learning how to read academic papers in NLP. I really struggled to understand some of the papers I had to read for my research in undergrad... I could take several days just to get through one paper. At first, I attributed this struggle to my lack of ML/DL experience. This was definitely a factor, but even after I filled some of those gaps in my knowledge, I still have trouble understanding new papers. For example, I can conceptually understand Word2Vec because there are several great video lectures or tutorials that explain it, but I probably wouldn't be able to understand it just from reading the paper by itself. I'm really interested in academia, but I definitely need to get better at reading papers. I would greatly appreciate any advice or recommendations of papers to start with.
0.93
t3_r2imrd
1,637,915,520
LanguageTechnology
Dependency graph
Dear all, can you suggest a free, good, and easy tool to obtain dependency graphs (as arcs, lines, and so on between nodes), starting from a CoNLL-U file? Many thanks.
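A minimal sketch, assuming the `conllu` and `networkx` packages: parse the file and draw the head-to-dependent arcs as a labeled graph. The file name is a placeholder:

```python
import matplotlib.pyplot as plt
import networkx as nx
from conllu import parse

data = open("sample.conllu", encoding="utf-8").read()
sentence = parse(data)[0]  # first sentence in the file

g = nx.DiGraph()
for token in sentence:
    if isinstance(token["id"], int):  # skip multiword-token ranges like 1-2
        g.add_node(token["id"], label=token["form"])
        if token["head"]:             # head 0 is the root, no incoming arc
            g.add_edge(token["head"], token["id"], deprel=token["deprel"])

nx.draw(g, labels=nx.get_node_attributes(g, "label"), with_labels=True)
plt.show()
```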
1
t3_r2iiji
1,637,915,008
LanguageTechnology
Creating numeric word representation of input sentences resulting in MemoryError
I am trying to use [`CountVectorizer`](https://scikit-learn.org/stable/modules/feature_extraction.html#common-vectorizer-usage) to obtain a numerical word representation of my data, which is essentially a list of 160,000 English sentences:

    import pandas as pd
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer

    df_train = pd.read_csv('data/train.csv')
    vectorizer = CountVectorizer(ngram_range=(1, 2), token_pattern=r'\b\w+\b', min_df=1)
    X = vectorizer.fit_transform(list(df_train.text))

Printing `X`:

    >>> X
    <160000x693699 sparse matrix of type '<class 'numpy.int64'>'
        with 3721191 stored elements in Compressed Sparse Row format>

But converting the whole thing to an array to get the numerical word representation of all the data gives:

    >>> X.toarray()
    ---------------------------------------------------------------------------
    MemoryError                               Traceback (most recent call last)
    ...
    MemoryError: Unable to allocate 827. GiB for an array with shape (160000, 693699) and data type int64

For the example on the linked scikit-learn [doc page](https://scikit-learn.org/stable/modules/feature_extraction.html#common-vectorizer-usage), they used only five sentences, so for them `X.toarray()` returns the array just fine. But since my dataset contains 160,000 sentences, the error message shows it results in a vocabulary of size 693,699 (containing both unique unigrams and bigrams, due to the `ngram_range` parameter passed to `CountVectorizer`), and hence the insufficient-memory issue.

**Q1.** How can I fix this? I am thinking of simply not materializing `X` and instead transforming in mini-batches as shown below. Is this correct?

    >>> X_batch = list(df_train[:10].text)  # do this for 160000 / batch_size batches
    >>> X_batch_encoding = vectorizer.transform(X_batch).toarray()
    >>> X_batch_encoding
    array([[0, 0, 0, ..., 0, 0, 0],
           ...,
           [0, 0, 0, ..., 0, 0, 0]], dtype=int64)
    >>> X_batch_encoding[0].shape
    (693699,)

**Q2.** I am thinking of training a neural network and a decision tree on this encoding for humor detection, but I guess it won't be a great idea to have a 693,699-length vector representing a single sentence. Right? If yes, what should I do instead? Should I use only unigrams when fitting `CountVectorizer` (even though that will not capture even minimal context, unlike bigrams)?

PS: I am creating a baseline for humor detection, and I am required to use `CountVectorizer`.
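For Q1, the usual fix is to never call `.toarray()` at all: scikit-learn's linear models consume the sparse CSR matrix directly. A minimal sketch continuing from `X` above; the `label` column name is an assumption:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X is the sparse output of vectorizer.fit_transform(...); never densify it
X_tr, X_te, y_tr, y_te = train_test_split(X, df_train.label, test_size=0.2)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)             # works on scipy sparse input as-is
print(clf.score(X_te, y_te))
```

Kept sparse, the 693,699-dimension count is mostly harmless for linear baselines, since only the ~3.7M non-zero entries are stored.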
1
t3_r24ytp
1,637,870,208
LanguageTechnology
Is there a corpus of English words + their language of origin?
Specifically, I want to be able to determine if a word is of Native origin... I've been searching around but I just can't believe there's no corpus for words plus their etymologies... apologies if dumb question
1
t3_r2483t
1,637,868,160
LanguageTechnology
Text summarization
Hello, is there any real-world example of a text summarization project? Ty
0.67
t3_r22kqe
1,637,863,680
LanguageTechnology
Any Historical Newspaper Headline Datasets? Like from WW2 to present?
I'm working on a social science project where we want to grab newspaper headlines from one or more mainstream (preferably US but UK is fine) media outlets. So far I haven't been able to find any. Can anyone point to one that might be available? It does have to be free.
1
t3_r220on
1,637,862,144
LanguageTechnology
No DOI for Published Paper
I recently published a paper to Findings in EMNLP: [https://aclanthology.org/2021.findings-emnlp.143/](https://aclanthology.org/2021.findings-emnlp.143/) I am trying to update my Arxiv submission, but I don't see a DOI here. Where can I find this information? Thanks!
0.67
t3_r21jqo
1,637,860,992
LanguageTechnology
How important it is to give sentences to BERT tokenizer rather than the whole text?
I'm currently working on document classification, where every document contains many sentences. I intend to use the BERT sequence classifier for the task. However, as I checked the tokenization results of BERT, I saw that the special token [SEP] is only added at the end of the document, rather than replacing every period in the text, even though periods are my end-of-sentence marks. On the other hand, I saw that BERT gives the "." punctuation mark a specific ID, which means it already has some meaning to BERT.

My question is: should I go ahead with only [SEP] at the end of the document and hope that the IDs corresponding to the punctuation marks can convey the sentence-level information, or should I redo my tokenization, feeding the texts sentence by sentence and then merging the IDs into a single vector later? There must be a better way, though. I believe knowing where a sentence begins and ends is important for the classification task, so I'm open to suggestions.
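For the second option, there is a simpler route than merging ID vectors: join the sentences on the tokenizer's `sep_token` string and tokenize once. A minimal sketch with placeholder sentences:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

sentences = ["The market fell sharply.", "Analysts blamed rate fears.", "Tech led losses."]
text = f" {tok.sep_token} ".join(sentences)  # [SEP] between every sentence pair

enc = tok(text, truncation=True, max_length=512, return_tensors="pt")
print(tok.decode(enc["input_ids"][0]))  # [CLS] s1 [SEP] s2 [SEP] s3 [SEP]
```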
0.92
t3_r1yw1r
1,637,853,824
LanguageTechnology
Companion Texts to Jurafsky and Martin's Speech and Language Processing?
The text is really dense and it's a bit hard to understand at times. Anyone know good companion texts that explain the content more? Maybe with some python examples?
0.92
t3_r1f71s
1,637,787,904
LanguageTechnology
Tutorial to build Deep Learning Punctuation Corrector in Python
nan
0.78
t3_r1cp3z
1,637,780,992
LanguageTechnology
Neural edit-tree lemmatization for spaCy
nan
0.81
t3_r1c536
1,637,779,584
LanguageTechnology
No data no problem, unsupervised learning and sentence transformers
Hi all, I put together [an article and video covering TSDAE fine-tuning](https://www.pinecone.io/learn/unsupervised-training-sentence-transformers/) for sentence transformer models - basically, how we can use plain unstructured text data to fine-tune a sentence transformer (not quite *no* data, but close!). From the TSDAE paper, you actually only need something like 10-100K sentences to fine-tune a pretrained transformer into producing pretty good sentence embeddings. I was achieving the same STSb evaluation with a TSDAE-trained BERT as I was getting with a BERT trained on my own labeled NLI dataset (using softmax loss). So pretty cool, imo. Although in reality supervised methods produce better-performing models, if you have no labeled data, unsupervised is the way to go. It was really cool learning about this, and I'm planning to do more on unsupervised sentence transformers in the future - let me know what you think!
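For anyone who wants the shape of the training loop, a minimal sketch following the sentence-transformers TSDAE example; `sentences` stands in for your 10-100K unlabeled lines, and the hyperparameters are the ones I recall from that example, so treat them as assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, datasets

sentences = ["unlabeled sentence one", "unlabeled sentence two"]  # your corpus here

bert = models.Transformer("bert-base-uncased")
pooling = models.Pooling(bert.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[bert, pooling])

train_data = datasets.DenoisingAutoEncoderDataset(sentences)  # adds noisy inputs
loader = DataLoader(train_data, batch_size=8, shuffle=True)
loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(train_objectives=[(loader, loss)], epochs=1, weight_decay=0,
          scheduler="constantlr", optimizer_params={"lr": 3e-5})
```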
0.57
t3_r195uq
1,637,771,648
LanguageTechnology
Evaluating quality of synthetically generated questions dataset
Hi all, I have an NLP related question for you! I have synthetically generated questions in a SQuAD like format (context, question, answer triplets). The data consists of domain specific questions in Dutch as the questions are generated from 1500+ Dutch technical manuals. How can I evaluate the quality of this dataset and therefore the quality of my question generator? Many, many thanks in advance!
0.81
t3_r1166j
1,637,745,664
LanguageTechnology
Best Way to Identify if a Social Post is Written by a Doctor vs Patient?
Hello fellow NLP nerds :) I have a tricky question that I'd love to crowdsource some solutions for.

Problem: I'm trying to separate out the social posts written by doctors vs. patients. I've already started separating based on typical identifiers such as "as a patient" vs. "being a doctor", "my patient" vs. "my doctor", and "patient here" vs. "I treated a patient". The issue is that this process of coming up with the ways a patient self-identifies compared to a doctor is extremely manual up front. I wanted to check whether the community knows of any libraries, previous code, or other research that could help speed things up?

Any and all ideas, thoughts and suggestions are more than welcome. Always all the best, NE
0.84
t3_r0qv0d
1,637,711,616
LanguageTechnology
Summaries readability improvement
I'm doing my research on multi-document summarization for domain-specific texts. We want to show summaries that we generate using our approach (and state-of-the-art systems) to domain experts for readability evaluation. The summaries we generate are pretty good, but hard to read for real people. Could you recommend some Python libraries for automatic readability improvement (capitalization, punctuation, finding orthographic mistakes, etc.)?
1
t3_r0j4ro
1,637,690,624
LanguageTechnology
NLP thesis ideas?
I am currently doing a postgraduate Computer Science conversion course in the UK and did English Language and Linguistics for my undergrad. I know that I want to combine both fields for my postgrad thesis but I don’t know anything about NLP. I know that there are tons of material out there but I don’t know where to begin. Posting here in the hopes that someone could guide me to some places for NLP or even just give me some ideas for possible avenues to follow for my thesis. Any suggestions would be greatly appreciated
0.93
t3_r0djf0
1,637,675,008
LanguageTechnology
Classifying documents in categories using keyword sets, without ML
Hi, I am trying to classify documents into categories for which I have lists of keywords. Ideally the solution should not use machine learning. I was thinking of creating vectors of both the document and the keywords for each category, and then calculating cosine similarity to see which category has the highest match. However, as cosine similarity is aimed at comparing 2 documents rather than 1 document and a list of keywords, I was wondering if this is the ideal solution. Any feedback on this would be highly appreciated - whether it is optimising the cosine similarity, a different approach, or proposing ML anyway... all feedback is welcome :). Thanks in advance.
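One way to make the idea concrete is to treat each category's keyword list as a pseudo-document, put it in the same TF-IDF space as the real document, and take the highest cosine similarity. No training involved; the categories below are placeholder examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

categories = {
    "finance": "invoice payment budget revenue tax",
    "hr": "hiring onboarding salary vacation employee",
}

document = "The onboarding plan covers the new employee's first month."

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(categories.values()) + [document])

# last row is the document; compare it against every keyword pseudo-document
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
print(dict(zip(categories, sims)))  # pick the argmax category
```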
1
t3_r0d1gd
1,637,673,472
LanguageTechnology
Speech Emotion Classification
We all know that the ASR (Automatic Speech Recognition) problem is all about extracting features - so-called spectrograms and waveforms - and making a prediction based on them, so the text or the actual meaning of the text is not the main thing. So the question is: if I have a speech model (here, for emotion classification) that is trained on, let's say, English, can I use it to make predictions on, let's say, Russian? Would it show similar accuracy on Russian as on English sentences?
0.81
t3_r05vxl
1,637,645,312
LanguageTechnology
Identifying sections from corpus of documents.
Hello everyone, Recently I have been looking for ways to identify sections in pdf documents where sections are not separated from one another. I was wondering if anyone has any good paper suggestions regarding this. I already read this paper http://ceur-ws.org/Vol-710/paper23.pdf but just wondering if anyone has additional suggestions for papers that tackle this problem. Thanks in advance for all your suggestions.
1
t3_r01mwc
1,637,631,488
LanguageTechnology
Which weights do we use to get embedding matrix for CBOW?
I originally asked this question on Stack Exchange but couldn't get any answers there, as usual: [https://datascience.stackexchange.com/questions/104332/how-to-get-word-embedding-in-cbow](https://datascience.stackexchange.com/questions/104332/how-to-get-word-embedding-in-cbow) So the problem: for skip-gram, we take the input weights and use them as the embedding matrix. However, in the case of CBOW we also take the input weights, but there are multiple inputs! Which one do we take? I couldn't find any answers about this. Can someone explain? (For diagrams, please refer to the link.)
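My current understanding, which I'd love to have confirmed: the context words all index a single shared input matrix, their rows are averaged into the hidden layer, and that shared matrix is the embedding matrix. A toy numpy sketch of the forward pass under that assumption:

    import numpy as np

    V, d = 10, 4                            # vocab size, embedding dimension
    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(V, d))          # ONE shared input matrix = the embeddings
    W_out = rng.normal(size=(d, V))         # output weights, usually discarded

    context_ids = [2, 5, 7]                 # every context word looks up the same W_in
    h = W_in[context_ids].mean(axis=0)      # hidden layer = average of context rows
    scores = h @ W_out                      # logits for the centre-word prediction
    probs = np.exp(scores) / np.exp(scores).sum()

So there is no "which input" to choose: the inputs are lookups into one matrix. (Some implementations also average W_in and W_out at the end, but taking W_in alone is the common choice.)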
1
t3_qzpgkc
1,637,598,976
LanguageTechnology
Meta/Facebook AI Releases XLS-R: A Self-Supervised Multilingual Model Trained On 128 Languages For A Variety Of Speech Tasks
Talking to one another is a natural way for people to engage. With advancing speech technology, people now interact with devices in their day-to-day lives. Despite this, speech technology is available for only a small percentage of the world's languages. Few-shot learning and even unsupervised speech recognition can be helpful, but the effectiveness of these methods depends on the quality of the self-supervised model. A recent Facebook study presents [XLS-R](https://arxiv.org/pdf/2111.09296.pdf), a new self-supervised model for a range of speech tasks. By training on approximately ten times more public data in more than twice as many languages, XLS-R significantly outperforms previous multilingual models. Quick Read: [https://www.marktechpost.com/2021/11/22/meta-facebook-ai-releases-xls-r-a-new-self-supervised-model-for-a-variety-of-speech-tasks/](https://www.marktechpost.com/2021/11/22/meta-facebook-ai-releases-xls-r-a-new-self-supervised-model-for-a-variety-of-speech-tasks/) Paper: https://arxiv.org/abs/2111.09296 Github: [https://github.com/pytorch/fairseq/tree/main/examples/wav2vec/xlsr](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec/xlsr) Facebook Blog: [https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages)
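As an illustrative sketch (the checkpoint name is the 0.3B variant published on the Hugging Face hub; details should be verified against the official examples), the released checkpoints can be loaded as generic speech encoders:

    import numpy as np
    import torch
    from transformers import AutoFeatureExtractor, Wav2Vec2Model

    extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-xls-r-300m")
    model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-300m")

    waveform = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz as a stand-in
    inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        frames = model(**inputs).last_hidden_state  # (1, num_frames, hidden_size)

The resulting frame-level representations are what downstream heads (CTC for recognition, classifiers for language or speaker identification, etc.) are fine-tuned on.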
1
t3_qzpdhc
1,637,598,720
LanguageTechnology
Replaying the WebNLG challenge 2017 using the T-REx dataset
Hi everybody, I'm a data science student and very new to NLP tasks. I'm learning how to use OpenNMT for my master's degree thesis. I completed the WebNLG challenge 2017 tutorial ([https://webnlg-challenge.loria.fr/challenge_2017/](https://webnlg-challenge.loria.fr/challenge_2017/)); now I would like to apply it to the T-REx sample dataset ([https://hadyelsahar.github.io/t-rex/downloads/](https://hadyelsahar.github.io/t-rex/downloads/)). My question is: how can I prepare the T-REx sample dataset in the same format as the WebNLG challenge 2017 dataset ([https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/webnlg_challenge_2017](https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/webnlg_challenge_2017))? Thanks all.
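In case it clarifies what I'm after, this is my current attempt at the conversion; the JSON field names ("text", "triples", "surfaceform") and the separator token are my guesses from skimming the sample, so corrections are welcome:

    import json

    with open("t_rex_sample.json") as f:        # hypothetical file name
        docs = json.load(f)

    with open("train.triple", "w") as src, open("train.lex", "w") as tgt:
        for doc in docs:
            flat = []
            for t in doc.get("triples", []):
                flat.append(" | ".join([
                    t["subject"]["surfaceform"],
                    t["predicate"]["surfaceform"],
                    t["object"]["surfaceform"],
                ]))
            if flat:
                src.write(" <TSP> ".join(flat) + "\n")  # separator token is my assumption
                tgt.write(doc["text"].replace("\n", " ") + "\n")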
0.84
t3_qzp0x0
1,637,597,696
LanguageTechnology
Does anyone know of a list of NLP/SpeechTech non-profits?
How's my favourite community? I'm currently doing my Master's at the University of Groningen's new Voice Technology programme. So far the only groups that have reached out offering thesis project proposals have been your typical big-data tech companies, or companies with government(/policing)-adjacent projects. I'm looking to solicit thesis project proposals from non-profits, NGOs, and maybe the healthcare sector. Does anyone know of a resource where I could get a list of these sorts of organizations? Many thanks!
0.75
t3_qzow1b
1,637,597,312
LanguageTechnology
Add-K smoothing
nan
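For reference, the usual statement of add-k (Lidstone) smoothing for a bigram model, where |V| is the vocabulary size and k = 1 recovers Laplace (add-one) smoothing:

    P_{\text{add-}k}(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\, w_i) + k}{C(w_{i-1}) + k\,|V|}

Fractional values of k (e.g. 0.05) often work better than 1 in practice.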
1
t3_qznetk
1,637,593,472
LanguageTechnology
Mentoring SemEval2022 projects
Hey everyone, I have seen some people here working on, or interested in, the SemEval projects. I was wondering whether some folks would like me to mentor their projects. Having published at venues like ACL, EACL, and SemEval 2021, I believe I have some knowledge of academic paper writing and of the NLP literature in general. I also believe it would let everyone network, which is always a great thing. :)
0.81
t3_qzma02
1,637,590,272
LanguageTechnology
[project-showcase] - zeroshot_topics: Label your text data automatically!
zeroshot_topics: [Github link](https://github.com/AnjanaRita/zeroshot_topics) Hand-labelled training sets are usually expensive and time-consuming to create, and some datasets call for domain expertise (e.g. medical or finance datasets). Given these costs and the inflexibility of hand-labelling, it would be nice to have tools that let us get started quickly with a minimal labelled dataset: enter weak supervision. **But what if you do not have any labelled data at all? Is there a way to still label your data automatically in some way?** That's where **zeroshot_topics** might be useful, to help you get up and running quickly. *zeroshot_topics* lets you do exactly that! It leverages the power of zero-shot classifiers, transformers & knowledge graphs to automatically suggest labels/topics from your text data; all you need to do is point it at your data. Please check this out and share your feedback.
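For those unfamiliar with zero-shot classification, the basic building block looks like this (any NLI checkpoint works; the one shown is just a common choice, not necessarily what the library ships with):

    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    result = classifier(
        "The central bank raised interest rates by 50 basis points.",
        candidate_labels=["finance", "sports", "politics"],
    )
    print(result["labels"][0], round(result["scores"][0], 3))  # -> finance ...

zeroshot_topics adds the part that matters in practice: suggesting the candidate labels themselves from your corpus rather than requiring you to supply them.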
0.94
t3_qzidz5
1,637,576,448
LanguageTechnology
What next now?
Hey guys, I just wanted to learn about future career possibilities in the field of NLP. I am from India, so a brief background about me: 1. Graduated from a STEM field (Mechanical Engineering, to be exact) in 2021. 2. I have some research experience: I published 3 papers as a first author at EACL, ACL, and SemEval, and have 2 more papers currently under submission to ACL. 3. I have worked in the UCLA NLP lab and am currently working as a machine learning intern at a startup. I am applying for Fall 2022 PhDs this year, but mostly to ambitious places. In case I don't get an admit that I like, I will apply for Fall 2023 admissions. Now my questions are about what I can do over the next year to benefit my profile: 1. Research-based positions are scarce in industry and generally go to graduate students. Does my research experience compensate for that when applying to such places? 2. I have heard of programs like the Google pre-doctoral, Microsoft Research Fellow, and Allen Institute Young Investigator positions, but these are all super competitive, I think. What experience would make me a competitive candidate at such places? Are there any other programmes like these where I might stand a chance with my current profile? 3. How do I hunt for research opportunities in industry at my current level? 4. If not research opportunities, which types of roles should I prefer (data scientist, data engineer, software engineer, etc.)?
0.9
t3_qzcg5a
1,637,553,536
LanguageTechnology
Looking for examples of conversational chatbot companies with recorded demos.
I'm doing some research into currently successful chatbot companies (both voice and text) and am looking for companies that have recorded demos of their chatbots working fully. One I found is [Brooke.ai](https://Brooke.ai), which works in the car-dealership appointment-setting industry as a customer service bot for inbound calls; their demo can be found [here](https://www.brooke.ai/?utm_source=google&utm_medium=cpc&utm_campaign=chatbot&creative=538797336365&keyword=chatbot%20service%20providers&matchtype=b&network=g&device=c&gclid=Cj0KCQiA-eeMBhCpARIsAAZfxZA5yf_BC4bJ46ILeeVlQXpXjdSLVm5fg-kAej4oHTUCJMg27vRI8KoaAn5nEALw_wcB#) and also on [youtube](https://www.youtube.com/watch?v=OARPZvWlUnk). I tried looking for other companies that have demos as well, but couldn't find many with pre-recorded demos of their AI chatbot product working fully; most are sequence-based chatbot-building software such as ManyChat or ChatFuel. Could you recommend chatbot companies with intent-recognition AI like Google Duplex or Amazon Lex? If they have demo recordings I can listen to or view, that would be great. I'm looking for companies that have been successful in implementing AI in chatbots.
1
t3_qza06w
1,637,545,600
LanguageTechnology
Lojban, constructed languages and NLP
Lojban is a constructed language that aims at clarity. As a language it is less syntactically ambiguous, contains no homophones, and has many other features intended to reduce both semantic and grammatical ambiguity. The big problem with trying to train an NLP model on Lojban is, of course, corpus size. Although many side-by-side translations of texts into Lojban exist, they have nothing like the scope that would be necessary to teach a neural net a language. I think it's entirely possible that, if we did have a large enough corpus, a model trained on Lojban might be able to achieve things a standard machine-learning setup can't. Still, we run into that fundamental barrier: corpus size. I can't help but think, though, that there is *something here*: an opportunity for a skilled research team in this area, if only they could locate it. Perhaps some intermediate case, like Esperanto, might be more feasible?
1
t3_qz6niv
1,637,535,232
LanguageTechnology
[Advice] What are some ways to engage with academia without a PhD?
Hi - a big fan of this subreddit! I am an applied NLP researcher in industry with a master's. I have a PhD offer, but I am in two minds about doing a PhD, because of the publishing culture, the duration of a PhD in the US, etc. However, I very much enjoy keeping up with recently published work and seeing how it can be tweaked and applied to real-world scenarios. What are some ways in which I can continue engaging with the academic community without doing a PhD? I somehow feel that one is not valued as much without a PhD (even with equivalent industry experience), so I wanted to get opinions from both sides. Thanks!
1
t3_qz3zw0
1,637,527,552
LanguageTechnology
Resource list for NLP beginners from a Meta AI ML researcher
During my morning scroll of tech twitter, I came across this [round-up of resources for anyone new to NLP](https://elvissaravia.substack.com/p/my-recommendations-for-getting-started), written and shared by an ML researcher at Meta AI. I thought this community might be interested, since I've seen quite a few posts by people looking for advice on where to start. I haven't personally used all of the books and other resources that he recommends, but the ones on his list that I have used -- the Bender book, the Jurafsky book, and the Manning lectures -- were all excellent. Moreover, I strongly agree with the approach of studying the fundamentals, including linguistics concepts, before jumping straight to the ML. Anyway, I hope it's useful to someone! "Elvis" (the author) is also well worth following on Twitter. Maybe all of you already do this, but I learn so much from following NLP academics and researchers/engineers in industry.
0.88
t3_qyyb0m
1,637,511,424
LanguageTechnology
GUI app for text processing?
I’m picturing a desktop application where you can highlight some text and say a command like "tokenize these words", and each of the highlighted words becomes a unique element of a list. Or you can highlight a region and say "segment this text", and it does the same but for sentences. Is there any way to do this?
0.66
t3_qywnxp
1,637,506,688
LanguageTechnology
Better segmentation than NLTK.sent_tokenize()
I am segmenting text in Juno, a Jupyter notebook iOS app. It doesn't support spaCy at the moment, and NLTK.sent_tokenize does not segment sentences perfectly for me. I am thinking my only choice is to write custom segmentation rules, unless anybody knows of a different library with a high-quality, intelligent segmenter that can work out where sentence boundaries are even when the text is not perfectly formatted. Thanks!
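One candidate I've found is pysbd, a pure-Python, rule-based sentence boundary detector, so it may install where spaCy won't; a minimal sketch:

    import pysbd

    seg = pysbd.Segmenter(language="en", clean=False)
    text = "Dr. Smith arrived at 5 p.m. He was late. (See fig. 2.)"
    for sentence in seg.segment(text):
        print(repr(sentence))

If that also falls short, I guess the fallback is regex rules layered on top of NLTK's output.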
0.83
t3_qyw34c
1,637,504,896
LanguageTechnology
[P] Pyconverse - Conversational Text transcript analysis library
Github project link: [pyconverse](https://github.com/AnjanaRita/converse) Conversation analytics plays an increasingly important role in shaping great customer experiences across various industries like finance, contact centres, etc., primarily to gain a deeper understanding of customers and to better serve their needs. This library, *PyConverse*, is an attempt to provide tools & methods that can be used to understand conversations from multiple perspectives using various NLP techniques. I have been doing what can be called conversational-text NLP, primarily with contact-centre data from domains like financial services, banking, and insurance, for the past year or so, and I have not come across any interesting open-source tools that help in understanding conversational texts. So I decided to create this library, which provides tools and methods to analyse calls and helps answer important questions / compute important metrics that people usually want from conversations in contact-centre data analysis settings. Things that can be done with this library: 1. Emotion identification 2. Empathetic statement identification 3. Call segmentation 4. Topic identification from call segments 5. Computation of various speaker attributes (word counts / number of words per utterance / negations etc.; identifying periods of silence & interruptions; question identification; backchannel identification; assessing the overall nature of the speaker via linguistic attributes and telling whether the speaker is talkative, verbally fluent, informal/personal/social, goal-oriented, or forward-looking vs. focused on the past; identifying inhibition). Please give it a try and share your feedback.
1
t3_qyt2p8
1,637,494,016
LanguageTechnology
How to extract prepositions from parallel texts?
Hello. I have parallel texts in English-German and English-French. EN.txt = "The hat is on the table.\n The picture is on the wall.\n The bottle is under the sink." DE.txt = "Der Hut liegt auf dem Tisch.\n Das Bild hängt an der Wand.\n Die Flasche ist unter dem Waschbecken." FR.txt = "Le chapeau est sur la table.\n La photo est sur le mur.\n La bouteille est sous l'évier." I would like to extract the prepositions from the EN-DE and EN-FR sentence pairs and create some kind of frequency counter in a dictionary of the pairs. Something like this, I guess: EN_DE = {"on":{"auf":1, "an":1}, "under":{"unter":1}} Eventually, I'd like to create an alignment matrix or a heatmap of the frequencies. Some questions: 1. Is this feasible? 2. How should I go about doing it? Algorithmically, I think I need to tokenise the sentences, POS-tag them using Stanza, **figure out which prepositions in English sentence X align with which prepositions in German sentence X**, and then update the counter dictionary. 3. The bold part is what I am particularly having trouble with. Any idea how I can best do that? 4. I am thinking of doing this in a pandas DataFrame. Is that a good idea? 5. Any other approaches to this problem? Any advice or suggestions would be much appreciated. I am very much a beginner programmer; I'm a linguist trying to use computational tools to analyse my data, but I feel like I might be out of my league. Thank you in advance for your suggestions.
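Here's how far I've got on the extraction side with Stanza; the alignment step is still the crude count-all-pairs placeholder I'd like to replace (a word aligner such as fast_align or awesome-align seems like the proper tool for the bold part):

    from collections import Counter, defaultdict
    import stanza

    # stanza.download("en") and stanza.download("de") are needed once beforehand.
    nlp_en = stanza.Pipeline("en", processors="tokenize,pos")
    nlp_de = stanza.Pipeline("de", processors="tokenize,mwt,pos")

    def prepositions(nlp, sentence):
        doc = nlp(sentence)
        return [w.text.lower() for s in doc.sentences for w in s.words if w.upos == "ADP"]

    en_de = defaultdict(Counter)
    pairs = [("The hat is on the table.", "Der Hut liegt auf dem Tisch.")]
    for en_sent, de_sent in pairs:
        for ep in prepositions(nlp_en, en_sent):
            for dp in prepositions(nlp_de, de_sent):   # counts every pair, hence "crude"
                en_de[ep][dp] += 1
    print({k: dict(v) for k, v in en_de.items()})      # -> {'on': {'auf': 1}}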
0.9
t3_qysddr
1,637,490,944
LanguageTechnology
Attributing dialogue to specific characters
Hi all, I'm currently working on a side project trying to analyse the sentiment of book characters in The Stormlight Archive, and as part of it need to determine who's saying what in a given dialogue. E.g. in the following >“I heard the guards talking,” the slave continued, shuffling a little closer. He had a twitch that made him blink too frequently. “You've tried to escape before, they said. You have escaped before.” > >Kaladin made no reply. > >“Look,” the slave said, moving his hand out from behind his rags and revealing his bowl of slop. It was half full. “Take me with you next time,” he whispered. “I'll give you this. Half my food from now until we get away. Please.” As he spoke, he attracted a few hungerspren. They looked like brown flies that flitted around the man's head, almost too small to see. > >Kaladin turned away, looking out at the endless hills and their shifting, moving grasses. He rested one arm across the bars and placed his head against it, legs still hanging out. > >“Well?” the slave asked. > >“You're an idiot. If you gave me half your food, you'd be too weak to escape if I were to flee. Which I won't. It doesn't work.” > >“But—” > >“Ten times,” Kaladin whispered. “Ten escape attempts in eight months, fleeing from five different masters. And how many of them worked?” I'd want to get an output something like { 'Kaladin': [ “You're an idiot. If you gave me half your food, you'd be too weak to escape if I were to flee. Which I won't. It doesn't work.”, "Ten times", “Ten escape attempts in eight months, fleeing from five different masters. And how many of them worked?” ], "Slave": [ “Look,”, “Take me with you next time,”, [...] ] } Assuming that identifying the characters themselves isn't an issue, and I have a ton of training data (it's a long book), what are some good methods to do this? The literature seems really sparse, except for this one paper [http://www.cs.columbia.edu/\~delson/pubs/AAAI10-ElsonMcKeown.pdf](http://www.cs.columbia.edu/~delson/pubs/AAAI10-ElsonMcKeown.pdf) which doesn't seem very easily transferable and is 11 years old. My current thought is that I can treat it as a classification task: >label = attribution\_pipeline(quote=''' "You're an idiot. If you gave me half your food, you'd be too weak to escape if I were to flee. Which I won't. It doesn't work.", context=\[full paragraph\]) but this seems a bit... overly-general as a solution, and I'm not sure if I'm just missing papers that discuss this. Any thoughts much appreciated!
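For reference, the rule-based baseline I have so far only handles quotes with an explicit speech verb nearby; everything else, like the unattributed alternating lines above, is what I'd need the classifier for:

    import re

    # Attach a quote to a following "<speaker> said/asked/..." cue in the same paragraph.
    CUE = re.compile(
        r'[“"]([^”"]+)[”"][^“”"]{0,40}?\b(?:the\s+)?([A-Za-z]+)\s+'
        r'(?:said|asked|whispered|continued|replied)'
    )

    def attribute(paragraph):
        return [(speaker, quote) for quote, speaker in CUE.findall(paragraph)]

    print(attribute("“Well?” the slave asked."))           # -> [('slave', 'Well?')]
    print(attribute("“Ten times,” Kaladin whispered."))    # -> [('Kaladin', 'Ten times,')]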
1
t3_qyi237
1,637,452,160
LanguageTechnology
Auto-Translator for Preserving a Semitic Language
Long story short: there's a dying Semitic language with native speakers still alive, Assyrian Neo-Aramaic, and I'm looking to increase the amount of data out there so I can hopefully train an Assyrian-English translation model. Context: Assyrian is a modern dialect of Aramaic. There is virtually no data out there that I could process into translated sentence pairs to train any sort of deep learning model. Since I have access to native speakers (my family and friends), I want to develop software that selects or generates English sentences and then has volunteers provide a translation. A FEW QUESTIONS ABOUT THIS! 1. The language is written in its own script: [https://en.wikipedia.org/wiki/Syriac_alphabet](https://en.wikipedia.org/wiki/Syriac_alphabet). Writing in the Syriac script is FAR from standardized, as there are so many dialects and no standard system of spelling. Also, I'm not sure how well AutoML tools work on non-Latin characters ([https://cloud.google.com/translate/automl/docs/prepare](https://cloud.google.com/translate/automl/docs/prepare)). Should I ask volunteers to give translations in an English phonetic spelling? 2. How many sentences would I need to train an effective translation model? Let's say I have a team of 10 native speakers who each devote 30 minutes a day to translating sentences; would that even produce enough training data? And given that there is no standard spelling, translations are going to be super noisy, in that the same Assyrian words will be transliterated in many different ways. 3. How should I pick which English sentences to ask speakers to translate? Should they be randomly generated? Should they be randomly selected from English books? Would it be more useful to have translations of collections of sentences within the same context rather than stand-alone sentences? Thank you so much; this project means a lot.
1
t3_qyfyez
1,637,445,760
LanguageTechnology
Which method/model to opt for while identifying semantic similarity?
I have a text classification dataset of registered issues. Within each category of issues there are specific issues that show similar patterns. How can I identify those subcategories within the categories? I don't have any means to manually label each subcategory (rather, it's impractical). All I understand is that this problem falls under unsupervised learning. I have already performed the text classification using BERT, and it works well enough.
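To make the question concrete, this is the pipeline I'm considering for one category at a time (the checkpoint and the number of clusters are placeholders; HDBSCAN would avoid fixing k when the number of sub-issues is unknown):

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    issues = [  # toy examples standing in for one BERT-predicted category
        "App crashes when I open settings",
        "Application freezes on the settings page",
        "Cannot reset my password",
        "Password reset email never arrives",
    ]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(issues)
    labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)
    for label, issue in sorted(zip(labels, issues)):
        print(label, issue)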
1
t3_qyba90
1,637,432,064
LanguageTechnology
WebNLG challenge 2017 error on Google Colab
Hi guys, I'm a data science student and new to the NLP field. For my master's degree thesis I need to learn the basics of NLP, so I'm trying to follow the WebNLG challenge 2017 tutorial ([https://webnlg-challenge.loria.fr/challenge_2017/](https://webnlg-challenge.loria.fr/challenge_2017/)). However, I am not familiar with Torch and Unix, and I can't understand how to run this command:

    th preprocess.lua \
      -train_src <data-directory>/train-webnlg-all-delex.triple \
      -train_tgt <data-directory>/train-webnlg-all-delex.lex \
      -valid_src <data-directory>/dev-webnlg-all-delex.triple \
      -valid_tgt <data-directory>/dev-webnlg-all-delex.lex \
      -src_seq_length 70 \
      -tgt_seq_length 70 \
      -save_data baseline

Here is my Google Colab notebook with all my steps: [https://github.com/dariodellamura/WebNLG-Challenge-2017-test/blob/main/nlg_pipeline.ipynb](https://github.com/dariodellamura/WebNLG-Challenge-2017-test/blob/main/nlg_pipeline.ipynb). I get this error:

    bash: cannot set terminal process group (72): Inappropriate ioctl for device
    bash: no job control in this shell
    th 70 \ -tgt_seq_length 70 \ -save_data baseline
    /content/torch/install/bin/luajit: cannot open preprocess.lua: No such file or directory
    stack traceback:
        [C]: in function 'dofile'
        ...tent/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: at 0x55f8357a6570

How can I solve this? Thanks all.
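In case it helps anyone answer: from the traceback I suspect (a) the notebook's working directory doesn't contain preprocess.lua, and (b) the backslash-continued command got split across separate shell invocations, which is why th only saw the fragment "th 70 \". A sketch of what I will try next, with hypothetical paths, assuming the Lua version of OpenNMT is cloned to /content/OpenNMT: a single %%bash cell run from inside the checkout, where line continuations work normally.

    %%bash
    cd /content/OpenNMT
    th preprocess.lua \
      -train_src /content/data/train-webnlg-all-delex.triple \
      -train_tgt /content/data/train-webnlg-all-delex.lex \
      -valid_src /content/data/dev-webnlg-all-delex.triple \
      -valid_tgt /content/data/dev-webnlg-all-delex.lex \
      -src_seq_length 70 -tgt_seq_length 70 -save_data baseline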
0.67
t3_qy3alk
1,637,405,824