Dataset columns:
sub: stringclasses (4 values)
title: stringlengths (3 to 304)
selftext: stringlengths (3 to 30k)
upvote_ratio: float32 (0.07 to 1)
id: stringlengths (9 to 9)
created_utc: float32 (1.6B to 1.65B)
LanguageTechnology
John Snow Labs Spark-NLP 3.4.0: New OpenAI GPT-2, new ALBERT, XLNet, RoBERTa, XLM-RoBERTa, and Longformer for Sequence Classification, support for Spark 3.2, new distributed Word2Vec, extend support to more Databricks & EMR runtimes, new state-of-the-art transformer models, bug fixes, and lots more!
nan
0.79
t3_rwpstk
1,641,397,504
LanguageTechnology
A 5 million source code file dataset
nan
0.67
t3_rwjtna
1,641,378,432
LanguageTechnology
Advice for stemming historical text
So I'm working on some early English text. For example, sometimes "up" is spelled "vp", or "himself" might be "himselfe"... or it might not. Is there any advice or good practice for how to handle stemming/lemmata etc.? Has anyone got experience doing word embeddings with this kind of data?
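A low-tech starting point that is sometimes enough (a sketch of the idea; the variant table below is hypothetical, and dedicated historical-spelling normalisers such as VARD exist for Early Modern English if the corpus is large): map known spelling variants onto their modern forms before stemming or training embeddings, so "vp" and "himselfe" collapse onto "up" and "himself" first.

    from nltk.stem import SnowballStemmer

    # Hypothetical variant table; in practice this would come from your corpus
    # or an existing normalisation resource for early English.
    VARIANTS = {"vp": "up", "himselfe": "himself", "loue": "love", "doe": "do"}

    stemmer = SnowballStemmer("english")

    def normalise_and_stem(tokens):
        modern = [VARIANTS.get(t.lower(), t.lower()) for t in tokens]
        return [stemmer.stem(t) for t in modern]

    print(normalise_and_stem(["He", "lifted", "himselfe", "vp"]))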
0.93
t3_rwihy9
1,641,373,184
LanguageTechnology
Augmented SBERT for domain transfer
Hi all, I put together [an article on applying AugSBERT](https://www.pinecone.io/learn/domain-transfer/) from Thakur, Reimers, etc for domain transfer tasks. Great for improving sentence transformer performance in a domain where we don't have data, but we *do have* data in another similar domain. I hope it's useful, let me know if you have any questions, ideas, etc - thanks!
1
t3_rvy09b
1,641,312,768
LanguageTechnology
NLP to Process Academic Citations
I have to process undergraduate and postgraduate student essays using spaCy. One of my first steps is to remove citations, both narrative and parenthetical ones, and I am using regex to do this. My regex is getting longer and longer and becoming very unwieldy. Moreover, I am assuming students are using APA 7th and not earlier versions or other styles entirely. I am unable to get good results using NER or POS, so I have to rely on regex. Are there any Python NLP packages that will recognise academic citations, both narrative and parenthetical ones? E.g. "Lee (1990) said ...", "... in the study conducted (Lee, 1990)".
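For reference, the two example patterns in the post can be captured with a fairly compact pair of regexes (illustrative only; real APA text has many more edge cases, which is exactly why hand-written rules tend to sprawl):

    import re

    # Narrative: "Lee (1990) said ..." / "Lee and Kim (1990) ..." / "Lee et al. (1990) ..."
    NARRATIVE = re.compile(
        r"\b[A-Z][A-Za-z'-]+(?:\s+(?:and|&)\s+[A-Z][A-Za-z'-]+)*(?:\s+et al\.)?\s+\(\d{4}[a-z]?\)"
    )

    # Parenthetical: "(Lee, 1990)" / "(Lee & Kim, 1990)" / "(Lee et al., 1990, p. 12)"
    PARENTHETICAL = re.compile(
        r"\([A-Z][A-Za-z'-]+(?:(?:,|\s+&|\s+and)\s+[A-Z][A-Za-z'-]+)*(?:\s+et al\.)?"
        r",\s+\d{4}[a-z]?(?:,\s*p{1,2}\.\s*\d+(?:-\d+)?)?\)"
    )

    text = "Lee (1990) said the effect was small, as noted in the study conducted (Lee, 1990)."
    cleaned = PARENTHETICAL.sub("", NARRATIVE.sub("", text))
    print(cleaned)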
1
t3_rvp049
1,641,282,560
LanguageTechnology
Researchers Propose A Novel Parameter Differentiation-Based Method That Can Automatically Determine Which Parameters Should Be Shared And Which Ones Should Be Language-Specific
In recent years, neural machine translation (NMT) has attracted a lot of attention and has had a lot of success. While traditional NMT is capable of translating a single language pair, training a separate model for each language pair is time-consuming, especially given the world’s thousands of languages. As a result, multilingual NMT is designed to handle many language pairs in a single model, lowering the cost of offline training and online deployment significantly. Furthermore, parameter sharing in multilingual neural machine translation promotes positive knowledge transfer between languages and is advantageous for low-resource translation.

Despite the advantages of cooperative training with a completely shared model, the MNMT approach has a model capacity problem. The shared parameters are more likely to preserve broad knowledge while ignoring language-specific knowledge. To improve the model capacity, researchers use heuristic design to create extra language-specific components and build a Multilingual neural machine translation (MNMT) model with a mix of shared and language-specific characteristics, such as the language-specific attention, lightweight language adaptor, or language-specific routing layer.

[Continue Reading](https://www.marktechpost.com/2022/01/03/researchers-propose-a-novel-parameter-differentiation-based-method-that-can-automatically-determine-which-parameters-should-be-shared-and-which-ones-should-be-language-specific/)

Paper: https://arxiv.org/pdf/2112.13619v1.pdf

Github: https://github.com/voidmagic/parameter-differentiation
0.92
t3_rvkz91
1,641,268,736
LanguageTechnology
Doubt about a point in BERT paper
In the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) it says that during training it masks a fraction of the words and replaces them with random words:

> The training data generator chooses 15% of the token positions at random for prediction. If the i-th token is chosen, we replace the i-th token with (1) the [MASK] token 80% of the time (2) a random token 10% of the time (3) the unchanged i-th token 10% of the time.

I can't wrap my head around the explanation it gives, can somebody point me somewhere about this part?

EDIT: What I don't understand is the justification to do the random word thing.
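For anyone puzzled by the same passage, a toy re-implementation of the 80/10/10 rule can help (a sketch, not the authors' code): every chosen position gets a prediction target, but what the model actually sees at that position varies. The paper's stated motivation for the random-token branch is that the encoder never knows which input tokens have been corrupted, so it is forced to keep a useful contextual representation of every token rather than only of [MASK] slots.

    import random

    def mask_tokens(tokens, vocab, mask_token="[MASK]", select_prob=0.15):
        inputs, labels = list(tokens), [None] * len(tokens)
        for i, tok in enumerate(tokens):
            if random.random() >= select_prob:
                continue                          # position not chosen for prediction
            labels[i] = tok                       # the model must recover the original token
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_token            # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
        return inputs, labels

    vocab = ["the", "cat", "sat", "dog", "ran", "mat"]
    print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab))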
1
t3_rvfi1w
1,641,253,120
LanguageTechnology
Open Source Chinese Language Thesaurus
Are there any open source Chinese language thesauruses? Akin to CEDICT, but with synonyms? I have an application that could really make use of something like that, and without one existing, we'll essentially have to do it by hand, which is fairly laborious.
1
t3_rvc54a
1,641,244,288
LanguageTechnology
Voice cloning + Language transfer == Clone yourself and speak a new language
nan
1
t3_rva3bz
1,641,238,912
LanguageTechnology
Hi guys, I just uploaded a video to YouTube about Romance languages compared to Latin Fruits. I would like you guys to check it out and leave a like. Thank you, link in the comments: https://youtu.be/H-Z3L9kGGjk
nan
0.5
t3_rv6ty2
1,641,230,464
LanguageTechnology
NLP: Hybridization of statistical approach and expert system ?
Hi everyone! I have a question for you. For context, we aggregate on a platform the various AI APIs on the market (GCP, Azure, etc.) and including NLP APIs (keyword extraction, sentiment analysis, NER, etc.). The idea is that a developer doesn't have to create accounts with different providers and can have them all on one API to test, compare and change whenever he wants. However, many customers ask us how to mix the "statistical" approach behind these APIs with expert systems and how to achieve hybridization. Do you have any idea how to do this? Thanks,
1
t3_rv4mf8
1,641,224,832
LanguageTechnology
Inserting documents into Postgres?
I have a postgres database that I want to use to store raw documents. These documents may contain lots of special characters. I'm trying to insert the documents into a postgres db and I keep getting syntax errors. Not sure what the best approach to this is. Here is the code I'm using with psycopg2:

    sql_statement = """
        PREPARE fooplan(text, text) AS
        INSERT INTO ocr (id, text) VALUES ($1, $2);
        EXECUTE fooplan({0}, {1});""".format(id, text)

    cur.execute(sql_statement)
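A common fix for this class of error is to let psycopg2 do the quoting by passing the values as query parameters instead of interpolating them with .format(); a minimal sketch (table and column names taken from the post, connection details assumed):

    import psycopg2

    # Assumed connection details; replace with your own.
    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()

    doc_id = "t3_example"
    text = 'Raw OCR text with "quotes", backslashes \\ and other special characters.'

    # %s placeholders are filled in by the driver, so special characters
    # in the document never touch the SQL syntax.
    cur.execute("INSERT INTO ocr (id, text) VALUES (%s, %s)", (doc_id, text))

    conn.commit()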
0.5
t3_rv4c4r
1,641,224,064
LanguageTechnology
[R] The Illustrated Retrieval Transformer (GPT3 performance at 4% the size)
nan
0.87
t3_rv2maj
1,641,219,200
LanguageTechnology
Faster keyword extraction
I’m using KeyBERT to extract 1000 keywords from a file. It was pretty slow when I did it for only 4 keywords. For 1000 it ran for almost 15 minutes before I terminated it, I believe it was processing the entire time but it’s just a massive computation. Can anyone advise me on speeding this up? I’m using a Digital Ocean Droplet. What specs do I need to do something like this in hopefully a few seconds? Are we talking 64-core CPU or a certain GPU or something? Or is there any advice on how I can be certain it’s still running, even after like 20 minutes? How long would you expect an execution like this to take and why? What is it about BERT that is so computation-intensive? Thank you
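As an illustration of one common speed lever (not necessarily the poster's setup): KeyBERT accepts any sentence-transformers model, and the expensive part is embedding the document and its candidate phrases, not top_n, so a small model on a GPU usually helps far more than adding CPU cores. A sketch assuming the keybert and sentence-transformers packages, with a hypothetical input file:

    from keybert import KeyBERT
    from sentence_transformers import SentenceTransformer

    # A small, fast embedding model; runs on GPU automatically if one is available.
    st_model = SentenceTransformer("all-MiniLM-L6-v2")
    kw_model = KeyBERT(model=st_model)

    with open("document.txt") as f:   # hypothetical input file
        doc = f.read()

    # top_n is nearly free: the document and candidate phrases are embedded once,
    # and the top 1000 are just the highest-scoring candidates.
    keywords = kw_model.extract_keywords(
        doc,
        keyphrase_ngram_range=(1, 2),
        stop_words="english",
        top_n=1000,
    )
    print(keywords[:10])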
1
t3_ruzq2s
1,641,209,600
LanguageTechnology
Open Domain Question Answering Part-1 [BlenderBot 2.0]
nan
0.67
t3_ruy7kl
1,641,203,840
LanguageTechnology
NLP tool for simple sentence correction in English (i.e. grammar)?
Hi all. A little background: my mother is a Chinese immigrant who is always lacking self-esteem in her ability to speak "correct" English. Whenever she sends a text over to someone who is a native English speaker, she always bugs me to correct her sentences so it sounds more "natural." Her English is honestly fine at a conversational level, but could definitely use some editing. I am wondering if there are NLP tools out there that can help my mom with this? Like if someone types a sentence like "Hi, I almost done" we can change it to something like "Hi, I *am* almost done"? Thanks in advance.
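One readily available option (an illustration, not an endorsement of a specific tool): the language_tool_python package wraps LanguageTool and can both flag issues and auto-apply its suggested corrections. A minimal sketch; note it downloads the LanguageTool server and needs Java on first run:

    import language_tool_python

    tool = language_tool_python.LanguageTool("en-US")

    sentence = "Hi, I almost done"
    matches = tool.check(sentence)        # list of detected issues
    corrected = tool.correct(sentence)    # apply the suggested fixes

    print(corrected)  # e.g. "Hi, I am almost done" (exact output depends on the rule set)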
0.92
t3_ruwav3
1,641,196,032
LanguageTechnology
MediaRecorder based smartphone recording vs dedicated app
Hi all! I'm proving out an idea for emotion detection using smartphone recordings. Ideally I would like to gather recordings using a web-based application and the MediaRecorder API with smartphones (targeting iOS primarily). Does anyone have experience with doing so? Are the results good enough to work with, or am I better off working on a dedicated app with more control over recording?
0.9
t3_rtngyl
1,641,056,768
LanguageTechnology
Next steps for after classification
Hello everyone! After lots of research and failure, I finally was able to use BERT for classifying text in my dataset. However, I feel like a dog that finally caught the car he has been chasing, because I am not sure what to do next. I had a series of questions that I want to pursue but was hoping for a professional opinion. First, I want to be able to look at some metrics for seeing how well my model performed. What are good metrics for a multiclass classification task? I know for a fact my classes are imbalanced, so what would be the best way to move forward with this? In short, what do you ask yourselves once the model is done training and what do you do to evaluate it? How can I improve? I am a nuclear engineer by trade and NLP/DL is still a very new concept and I was hoping to get insight from the masters out there. Thanks in advance and happy new year!
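For the metrics question, a common starting point for imbalanced multiclass problems is the per-class precision/recall/F1 report plus a confusion matrix, with macro-averaged F1 (which weights every class equally) as the headline number. A minimal sketch with scikit-learn; the label arrays are hypothetical stand-ins for your test-set predictions:

    from sklearn.metrics import classification_report, confusion_matrix, f1_score

    # Hypothetical gold labels and model predictions from a held-out test set.
    y_true = ["safety", "ops", "ops", "maintenance", "safety", "ops"]
    y_pred = ["safety", "ops", "maintenance", "maintenance", "ops", "ops"]

    print(classification_report(y_true, y_pred))       # per-class precision/recall/F1
    print(confusion_matrix(y_true, y_pred))            # which classes get confused with which
    print(f1_score(y_true, y_pred, average="macro"))   # single number, robust to class imbalance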
0.88
t3_rt6pvi
1,640,995,328
LanguageTechnology
Having trouble with stemming (NLTK library)
I tried using different packages but they all still just return a "None" value whenever I try to stem a word. Is it because of my Python version?
0.6
t3_rsx4cz
1,640,966,912
LanguageTechnology
High-level APIs ruined NLP?
It seems like things like HuggingFace and spaCy and whatever have done some harm to NLP as a whole. For instance, I've heard NLP engineers have less pay potential compared to computer vision folk due to most models just being run through their pipelines. Also, it seems difficult to find tutorials post-2018 on topics like NER and such from scratch. Everything is getting abstracted into APIs and fewer people are learning things from the ground up. What do you think?
0.42
t3_rsrwqa
1,640,949,888
LanguageTechnology
Custom NER with spaCy v3 Tutorial | Free Web-based NER Data Annotation Tool
nan
0.71
t3_rsbokc
1,640,897,024
LanguageTechnology
Healthsea: an end-to-end spaCy pipeline for exploring health supplement effects
nan
0.9
t3_rs6ger
1,640,883,712
LanguageTechnology
[Project] Figuring the "sophistication" level of a text, similar to Grammarly.
Hi, I have a project in mind and the first "mini-project" within it is to assign a Score to a text depending on the depth of the vocabulary. Similar to what Grammarly does. I know I have to use a dictionary, but beyond that I don't have much. A bonus would be to also assign a "Class" to the text depending on the vocabulary used; ex: While a Scientist and a Writer might have very similar "depth" Scores, their vocabularies are not the same, the program should assign to which "Class" does the text belong. But this might be a bit hard.
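One simple baseline for a "depth of vocabulary" score (a sketch of an approach, not what Grammarly actually does): rate each word by how rare it is in general usage, for example with the wordfreq package's Zipf scale, and average over the text.

    from wordfreq import zipf_frequency

    def sophistication_score(text, lang="en"):
        words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
        words = [w for w in words if w]
        # zipf_frequency is roughly 7 for very common words and 1-2 for rare ones;
        # invert it so rarer vocabulary yields a higher score.
        rarities = [8.0 - zipf_frequency(w, lang) for w in words]
        return sum(rarities) / len(rarities)

    print(sophistication_score("The cat sat on the mat."))
    print(sophistication_score("The feline reposed upon the woven floor covering."))

The "Class" part of the project is closer to topic or domain classification, which this frequency-only score deliberately ignores.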
1
t3_rs5rqj
1,640,881,920
LanguageTechnology
Teaching transformer "sentence" orders
Hi there, I'm trying to tackle quite a difficult problem with the help of sentence-transformer models. I've got a bunch of JSON (alternatively YAML) files from different domains, which contain basically entities as JSON schemas consisting of data fields and descriptions. The entities can be ordered in kind of a hierarchical structure, which is not really strict though and may differ from file to file. I assume that there exist common patterns between those files, precisely how the entities can be ordered in a semantically "meaningful" way (a human can understand the structures based on the descriptions). I would like to either

**a) Cluster the schemas to identify similarities between those entities**

What I tried: clustering the descriptions with KMeans and SentenceTransformers. Problems here:

- If I use only the descriptions they get clustered mostly by domain
- If I try to cluster the "raw" JSON, most models don't find any similarity (tried also CodeBerta etc)

=> My idea here would be to fine-tune a model which always encodes two JSON parts as sentence input, and use the description similarity to generate either a classification score or even NLI scores to train the model on this data. Would this be a valid approach, or what could be better ideas?

**b) More of a crazy but interesting idea: if I assume that the "structure" can be modeled as a "sentence" which consists of "words" (embedded entities), then probably some sort of model could learn those "sentences".**

=> How to create "words" from sentences? I thought about creating sentence embeddings for all entities, and then building "entity-sentences" from the CLS tokens? How to build a classifier for such "sentences"? Are there any good approaches or is there any previous work done?
=> Does it make sense to create the model from scratch or would it be helpful to fine-tune an existing model with this approach?
=> Would it make sense to look at a completely different sort of ML technology?
1
t3_rs2c2c
1,640,872,448
LanguageTechnology
Clause segmentation
I’m trying to learn how to segment text into significant clauses. Here’s a promising approach: https://stackoverflow.com/questions/65227103/clause-extraction-long-sentence-segmentation-in-python

    chunks = []
    for sent in doc.sents:
        heads = [cc for cc in sent.root.children if cc.dep_ == 'conj']

What are the children of a sentence’s root? Does that mean every possible lowest-level syntactic element like “D”, “Quantifier”, “N”, etc.? So the author decided to find conjunctions? What about just looking at the syntax tree and breaking it on a lateral level - like the three elements one level down from the root, make those the segments? Or what about just pure machine learning for this? Just train a custom segmenter by showing it where you would break sentences, and don’t do any explicit syntax parsing?

        # note: the snippet assumes a set `seen` initialized before the loop
        for head in heads:
            words = [ww for ww in head.subtree]
            for word in words:
                seen.add(word)
            chunk = ' '.join([ww.text for ww in words])
            chunks.append((head.i, chunk))

        unseen = [ww for ww in sent if ww not in seen]
        chunk = ' '.join([ww.text for ww in unseen])
        chunks.append((sent.root.i, chunk))

    chunks = sorted(chunks, key=lambda x: x[0])
    for ii, chunk in chunks:
        print(chunk)

Is this just going to the break-points in the sentence the program identified and pulling out all the words consecutively? Or, what is this doing? Thank you
0.84
t3_rs22ru
1,640,871,680
LanguageTechnology
[P] Ecco - Language model analysis and visualization toolkit
nan
0.9
t3_rrydxi
1,640,859,008
LanguageTechnology
[Explainer] Inter-Annotator Agreement: An Introduction to Krippendorff’s Alpha
nan
0.75
t3_rrpbty
1,640,829,440
LanguageTechnology
Any tools to help with labeling your own data set?
So, say I'm attempting to label a training data set for a sentence classification model. What would be the best tool to load a bunch of documents, have each document be split into sentences, and then show me each sentence so I can label it myself? Any ideas on what I should use?
0.72
t3_rrn0jg
1,640,823,040
LanguageTechnology
Q: Transformers - Query, Key and Value Vectors in "Attention is all you need"
Hi everyone! Can someone explain to me how query, key and value vectors are obtained from the input word embeddings in an encoder or decoder layer? I see how they (the q, k, v vectors or matrices respectively) are used in the multi-head attention layer, but I don't understand where they come from. They have to depend on the input word embedding, but how? In the original transformer paper [(Attention is all you need)](https://arxiv.org/pdf/1706.03762.pdf) I only found those vectors mentioned in chapter 3.2:

*An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.*

If anyone could help me answer this question, it would be great!

EDIT: My thanks to /u/Brudaks, /u/boodleboodle and /u/mehtajineshs for clarification on this by providing explanations and resources. I do understand now that the vectors depend on the output from previous layers and are obtained by multiplying the previous layer's output (or the word embeddings, in the case of the first layer) with randomly initialized matrices, just like other weights in an FF network are initialized randomly as well. And like weights in a feed-forward network, the matrices producing the Q, K and V vectors are learned by backprop.
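To make the EDIT concrete, a minimal single-head sketch (toy dimensions, random weights standing in for learned parameters): each of Q, K and V is just the layer input multiplied by its own trainable projection matrix, and everything downstream is the scaled dot-product step quoted above.

    import torch

    d_model, seq_len = 8, 5
    x = torch.randn(seq_len, d_model)    # word embeddings or previous layer output

    # Learned projection matrices (randomly initialized, then trained by backprop).
    W_q = torch.randn(d_model, d_model)
    W_k = torch.randn(d_model, d_model)
    W_v = torch.randn(d_model, d_model)

    Q, K, V = x @ W_q, x @ W_k, x @ W_v  # queries, keys, values all derive from x

    scores = Q @ K.T / (d_model ** 0.5)  # scaled dot-product "compatibility function"
    weights = torch.softmax(scores, dim=-1)
    output = weights @ V                 # weighted sum of the values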
0.76
t3_rri6dm
1,640,810,624
LanguageTechnology
Baidu And PCL Team Introduce ERNIE 3.0 Titan: A Pre-Training Language Model With 260 Billion Parameters
With recent breakthroughs in AI, humans have become more reliant on AI to address real-world problems. This makes humans’ ability to learn and act on knowledge just as essential as a computer’s. Humans learn and gather information through learning and experience to understand everything from their immediate surroundings. The ability to comprehend and solve issues, and separate facts from absurdities, increases as the knowledge base grows. However, such knowledge is lacking in AI systems, restricting their ability to adapt to atypical problem data.

Previous studies show that pre-trained language models improve performance on various natural language interpretation and generation tasks. A recent work of researchers at Baidu, in collaboration with Peng Cheng Laboratory (PCL), releases PCL-BAIDU Wenxin (or “ERNIE 3.0 Titan”), a pre-training language model with 260 billion parameters. It is the world’s first knowledge-enhanced multi-hundred billion parameter model and its largest Chinese singleton model.

You can read the short summary here: [https://www.marktechpost.com/2021/12/29/baidu-and-pcl-team-introduce-ernie-3-0-titan-a-pre-training-language-model-with-260-billion-parameters/](https://www.marktechpost.com/2021/12/29/baidu-and-pcl-team-introduce-ernie-3-0-titan-a-pre-training-language-model-with-260-billion-parameters/)

Paper: https://arxiv.org/pdf/2112.12731.pdf
1
t3_rrgrnf
1,640,807,040
LanguageTechnology
Extractive summarization
What model would you recommend for extractive summarization? I have a dataset of restaurant menus and I want to extract the dishes with their prices. So the input will be the entire menu and the output will be CSV-like text with the dishes as the first column and their prices as the second. I was thinking of T5, but I just dabble in NLP; maybe you have a better idea? Thanks
0.5
t3_rrehwy
1,640,801,408
LanguageTechnology
Suggestions for newbie trying to dabble in NLP
Hi everyone! I'm an experimental social science researcher who is trying to get into some very basic NLP as a supplementary skillset. I learned how to use LIWC (in a very short 4-week workshop) during my doctoral program, but haven't done anything related to NLP for at least 5 years. I've skimmed through some posts here and someone said "NLP has progressed a lot more from LIWC since the past couple years" so I'm trying to get reacclimated with NLP. Do you guys have any suggestions (youtube videos / websites / books) on where to start? My goal is first to learn how to do the most basic sentiment analysis and/or any other elementary analyses using R, and then once comfortable, gradually move on to more advanced topics (I consider myself a good self-learner :) ). Another question I had was whether there was anything similar to LIWC in R, but again others seem to have commented on this subreddit that there are better tools than LIWC these days..? Sorry for the really vague / general question - I would appreciate any comments or pointers!
0.92
t3_rr9wgp
1,640,789,504
LanguageTechnology
What is a Graph Neural Network?
nan
0.82
t3_rqu6xf
1,640,737,664
LanguageTechnology
Open Discussion: ways to prevent Voice Synthesis misuse
nan
0.63
t3_rqoz8d
1,640,723,584
LanguageTechnology
Rundown of Transformer Tokenizers and how to build them
Hi all, I'm working on a project to build a set of language models for the Maldivian language of Dhivehi. It's a lot of fun and super interesting, the first step (for me) has been building a tokenizer that handles the language and its unique Thaana script. I just published a [video](https://youtu.be/mjKqP3kRxbQ) and [article](https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4) ([link if you hit paywall](https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4?sk=c0c16de9eea7dbe1d2a9c106abf38e1a)) explaining the steps and each of the components in a tokenizer (eg normalization, pretokenization, decoding, etc). I hope some of you find it useful, lmk what you think - thanks!
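For anyone who wants the code-level view of those components, the Hugging Face tokenizers library exposes each one explicitly; a generic sketch (WordPiece chosen purely as an example, and dhivehi_corpus.txt is a placeholder file name, not a real resource):

    from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, decoders

    tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))

    # Normalization: clean and standardize the raw text (NFKC unicode normalization here).
    tokenizer.normalizer = normalizers.Sequence([normalizers.NFKC()])
    # Pre-tokenization: split the text into word-like chunks before subword training.
    tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

    trainer = trainers.WordPieceTrainer(
        vocab_size=30_000,
        special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
    )
    tokenizer.train(files=["dhivehi_corpus.txt"], trainer=trainer)

    # Decoding: merge subwords back into readable text.
    tokenizer.decoder = decoders.WordPiece()

    print(tokenizer.encode("some text in Thaana script").tokens)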
0.72
t3_rqitqe
1,640,706,688
LanguageTechnology
Searching for irregular forms automatically
Hello all, I would like to ask which kind of tools you use to search for something that is not in the training model when you do a computational analysis. Let's say that I want to search for errors in a corpus (e.g. misspelled words), while the lemmatization procedure fails because these elements are wrong - how can you deal automatically with such events? I hope that the question is clear enough :)
1
t3_rqegda
1,640,693,376
LanguageTechnology
looking for help on approaching problem
I currently have a list of sentence fragments that loosely describe listings for sale for houses/apartments/mansions etc. They might look something like this:

*[apartment, 4 glazed windows, wood floors and well insulated, with large pool]*
*[large apartment, 4 bedrooms, 1 master bathroom, carpet everywhere but not in bathrooms]*
*[baby room, 2 bed, half a washroom, crawlspace attic for storage, garden with swim area]*

I want to apply labels (keywords) to these fragments to "standardize" the language, which I can then use to process later. Knowing to group the following is important:

"large pool" --> "has swimming pool"
"garden with swim area" --> "has swimming pool"

The "keywords" I might want to use for the examples:

1. [apartment, 4 glazed windows, wood floors and well insulated, with large pool] ---> [apt, has_floor, has_pool]
2. [large apartment, 4 bedrooms, 1 master bathroom, carpet everywhere but not in bathrooms] ---> [apt, has_floor, has_bedrooms]
3. [baby room, 2 bed, half a washroom, crawlspace attic for storage, garden with swim area] ---> [has_bedrooms, has_attic, has_pool]

I do not need to "capture" all the descriptions from the sentence fragments. At the least, I want to be able to grab the lowest-hanging fruit first (right now I have nothing!). I see that I have some issues:

1. How do I break down these "sentence fragments" so that analysis can be done?
2. How can I "group" text that shows up so that I know what categories I want to create? Even better, if groupings can be automatically created/suggested.
3. Even if I have "labels" that I want to assign to a set of fragments, how do I train a model to actually do this? (Like if I spent 5 hours (which I have) labeling some very basic categories... how do I use this?)

One possible wrinkle I have is that I do not care which "sentence fragment" corresponds to which label. (When I labeled the dataset, I just said: does this set of sentence fragments correspond to these labels/keywords?) Therefore it is difficult for me to map a sentence fragment directly to a group with heuristics. In the end, I do not necessarily care (or know) which of the sentence fragments actually correspond to the label, just that this example should have the given labels.

I hope my problem description makes sense, and I'm looking for any type of directed help/approaches. I have looked at "tokenization", "word count", "bag of words" etc. but I am unable to understand it enough to see the full picture of how to use it. Any comments appreciated! [language of choice: python]
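For the "labels apply to the whole listing, not to individual fragments" setup, one lowest-hanging-fruit baseline (an illustrative sketch; the example listings and labels are copied from the post, everything else is made up) is to treat each listing as one bag of words and train a multi-label classifier:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import MultiLabelBinarizer
    from sklearn.pipeline import make_pipeline

    # Each listing is joined into one string; labels apply to the whole listing.
    listings = [
        "apartment, 4 glazed windows, wood floors and well insulated, with large pool",
        "large apartment, 4 bedrooms, 1 master bathroom, carpet everywhere but not in bathrooms",
        "baby room, 2 bed, half a washroom, crawlspace attic for storage, garden with swim area",
    ]
    labels = [
        ["apt", "has_floor", "has_pool"],
        ["apt", "has_floor", "has_bedrooms"],
        ["has_bedrooms", "has_attic", "has_pool"],
    ]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(labels)          # labels -> binary indicator matrix

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),            # unigrams + bigrams ("swim area", "large pool")
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),  # one binary model per label
    )
    clf.fit(listings, y)

    pred = clf.predict(["studio with heated pool and hardwood floors"])
    print(mlb.inverse_transform(pred))

With only the 5 hours of labels mentioned in the post, results will be rough, but it gives a concrete baseline to improve on before reaching for embeddings or clustering.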
1
t3_rqe7js
1,640,692,608
LanguageTechnology
Huggingface for glossary creation
Does anybody know of a leading AI model for glossary creation? I'm considering using spaCy for this but so far I found their entity recognition and even their segmentation to be good but not necessarily flawless. I could stick in custom-trained models for sure, it honestly might not be that hard. I'm wondering if anybody has gone before me here, though. An auto-glossary creation tool at minimum should:

1. Recognise terms, not necessarily entities. Entities appear to be more trivial, like even just years and numbers come up sometimes. Terms are important keywords.
2. Retrieve context/example sentences from the source documents for each word. AI is not strictly necessary for this, but it could be leveraged in deciding which sentence containing a term is most "representative". Plus, AI would come in handy for lemma-matching - it should be able to search for any grammatical form of a word in source text, and not match "crudely" as in maybe a homonym of a word.
3. Ideally, it should auto-categorize terms (I'm planning on trying BERT to generate a "similarity score", grouping terms with nearness to each other and then generating a label for that group).

So: this is the project I'm currently working on. Has anybody already done something like this, ready to go? Thank you
1
t3_rqdq05
1,640,690,688
LanguageTechnology
An awesome list about vector similarity search : find open source vector search libraries, service, platform service and research papers
nan
0.87
t3_rqaplb
1,640,679,040
LanguageTechnology
I am looking for something like synthesia.io
I am searching for a natural text-to-speech or voice cloning program at least of the quality of [synthesia.io](https://synthesia.io). I don't need the video part though. Preferably open source or something cheap.
1
t3_rq2adm
1,640,652,928
LanguageTechnology
Minimizing relation types in knowledge graphs
Has work been done on selecting a minimum subset of relation types? Ideally it could be reduced to just one. It would probably be one of the first words that that children learn. Something like "is" or "has". Having just one type of relation would greatly simplify the representation. "Is" could represent categories "cat is animal" "Has" could represent parts "cat has tongue" So what I'm thinking is that "has" would be a prime candidate as a single sufficient relation type, because categories and subcategories could be determined easily without any relation between entities: if one entity (animal) has a subset of relations that another entity (cat) has, then it means that "cat is animal". To take the other route and use only "is" (categories) to infer parts from it -- I don't know how it could be easily done. Anyway, perhaps it is possible to use any 1 relation and infer all others based on that, the question is which is more natural for language as we commonly use it.
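The subset idea in the post is easy to play with directly; a toy sketch with made-up "has" facts, where the "is" relations are derived rather than stored:

    # Hypothetical "has" facts only.
    has = {
        "cat":    {"tongue", "heart", "fur", "whiskers"},
        "animal": {"tongue", "heart"},
        "plant":  {"leaves"},
    }

    # a "is" b if everything b has, a has too (b's parts are a subset of a's).
    for a in has:
        for b in has:
            if a != b and has[b] <= has[a]:
                print(f"{a} is {b}")   # prints: cat is animal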
1
t3_rpuft7
1,640,631,424
LanguageTechnology
Looking for people to learn Python Coding With
Hi there, I've recently graduated with a BA in Linguistics and I'm currently pursuing a career in Computational Linguistics. I plan on applying to an Msc in a CompLing related degree in a year or two, but I'm currently taking some time off to relax and also learn Python Coding and polish my math skills. However, learning Python from scratch and also learning it independently has been really difficult as I find myself stuck often with nobody that I could talk to about Python, and also I find myself lacking the motivation to keep going. It would be really nice and helpful if I had a few people I can go to regarding Python-related things. We could motivate each / help each other out etc. Please let me know if you're interested!
0.86
t3_rpmr79
1,640,609,024
LanguageTechnology
Microsoft Introduces the Next Generation of the Conversational Language Understanding Client Library
The demand for intelligent technologies that can interpret brief text has increased. As increasingly sophisticated solutions are produced, there is a greater need to improve and facilitate the creation of these complex situations. These scenarios range from intelligent customer assistance bots to independent computers that interpret human input. The Language Cognitive Service has opted to employ a multilingual transformer-based paradigm to deal with such problems. When using this model, customers will notice a considerable increase in performance over the old Language Understanding Service (LUIS).

Microsoft has released the next generation Conversational Language Understanding client library, allowing developers to use the Azure Cloud Conversational Language Understanding service to train models and use them in applications to provide related language services. Developers can use .NET or Python, and these libraries are currently under beta development.

Quick Read: [https://www.marktechpost.com/2021/12/27/microsoft-introduces-the-next-generation-of-the-conversational-language-understanding-client-library/](https://www.marktechpost.com/2021/12/27/microsoft-introduces-the-next-generation-of-the-conversational-language-understanding-client-library/)

* [CLU documentation](https://docs.microsoft.com/azure/cognitive-services/language-service/conversational-language-understanding/overview)
* [.NET reference documentation](https://docs.microsoft.com/dotnet/api/Azure.AI.Language.QuestionAnswering?view=azure-dotnet)
* [Python reference documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations)

Microsoft Blog: https://devblogs.microsoft.com/azure-sdk/introducing-the-next-generation-of-the-conversational-language-understanding-client-library/
0.86
t3_rpjdav
1,640,595,968
LanguageTechnology
Romanian word embeddings
Hello everyone!

Have you heard about word embeddings? Nooo? Then you need to learn about them. But seriously, I'm in the process of creating some corpora for word embeddings for the Romanian language (and I know a lot of them already exist, but they are not mine). That's why I decided to start this project. What are the goals I am pursuing?

1. The text must be clean;
2. It must be trained on a lot of text;
3. And the corpus must be accurate.

Here you can see some of them: [https://github.com/BlackKakapo/Romanian-Word-Embeddings](https://github.com/BlackKakapo/Romanian-Word-Embeddings)

The rest will appear in the near future, and I will try to do them as soon as possible. Maybe some changes and fixes will appear, but I'll keep you posted. And of course you can leave a comment on what you like or dislike. I will be very grateful. Respectfully
0.67
t3_rphw8k
1,640,590,336
LanguageTechnology
How did you advance your NLP career?
Dear all,

If there are any NLP/ML engineers, DS, or researchers out there, I could really use some advice. I am graduating from my MS in Economics with a full-time job lined up as a DS at a well-known fintech company. However, it is driving me crazy trying to find a clear path forward to pursue a more NLP-involved job down the line.

Here is what I currently have that can be classified as NLP "experience":

1. Past internships! I have done everything from product management intern for data products powered by NLP to management consultant doing research on the data collection strategies a client could take to improve their NLP classification outcomes
2. Research! I am writing a paper with NLP researchers on applying NLP techniques to public policy related documents, due to be published in the next couple of months
3. Current job! The team that I am currently on and hired into (and have been interning on) uses a lot of NLP for insights discovery. We also plan on launching a large-scale NLP product down the line, which I will be very involved in given our very lean corporate structure

Why I think I will have a hard time advancing in the field:

1. I do not have a CS undergrad or MS in CS
2. My background in economics dictated that I am good at math but not at linguistics
3. I do not come from a hyper-prestigious school like Stanford or MIT but a mid-tier school on the East Coast (US)

I feel everyone in the field is so overqualified for what they are doing (granted, people may just be very good imposters)! I have no clue what to do??? Should I go get an MSCS to compete down the line? How does moving up in NLP careers work? Can any folks shine some light on a very confused young person! I will literally take any suggestions or advice haha. thank u y'all!
0.91
t3_rp96xk
1,640,562,560
LanguageTechnology
January 5, 2022 online: "Compositional Natural Language Processing on Quantum Computers"
January 5, 2022 online: "Compositional Natural Language Processing on Quantum Computers" with Konstantinos Meichanetzidis, [Cambridge Quantum](https://cambridgequantum.com/) & [Quantinuum](https://www.quantinuum.com/). Info & RSVP: [https://www.meetup.com/NY-NLP/events/282107959/](https://www.meetup.com/NY-NLP/events/282107959/) #NLProc #QuantumComputing
0.67
t3_royiix
1,640,530,816
LanguageTechnology
creature_feature: Composable N-Gram Combinators that are Ergonomic and Bare-Metal Fast
nan
1
t3_rovnon
1,640,520,064
LanguageTechnology
What are some good stories from the history of NLP?
I've been Wikipedia-diving some of the history of NLP recently, and I'd like to know if anyone has any interesting stories about researchers/experiments in the field. You know, like when your high school history teacher goes on a random tangent about all the different torture methods throughout the ages. Thanks!
0.78
t3_ro6v4v
1,640,425,600
LanguageTechnology
How to measure accuracy of a generative chatbot model
Hi, How to measure the accuracy of a generative chat bot or any generative model? Is it possible?
0.92
t3_rnnw4s
1,640,357,760
LanguageTechnology
People with degrees in Language Technology, what are you doing now?
I'm (hopefully) starting a master's in Computational Linguistics in the fall, and I'm curious to know what people who have done an undergrad/master/phd in cl, nlp, or even just compsci with an nlp focus end up actually doing after their degrees.
0.93
t3_rn9kbo
1,640,305,792
LanguageTechnology
Computational grammar checking
I’ve been interested in this for a long time, hoping to finally crack the code here. In Swedish an adjective can be determined or not. Like in English the article “the” vs “a”. This is a more obscure case, a phrase in a product catalog “On hard floor”. Not sure what grammarians would say is going on here. It’s a general case so I would say it’s non-determined and they dropped the “a” just for brevity. On the other hand it is more comparable to the general case which commonly uses the plural in English: “on hard floors”. I would like to have a convenient system to check what is done in Swedish without just leafing through grammar websites and so on. I want to access a most convenient Swedish corpus - not a database requiring a sign up but just an easily downloadable dataset, maybe Kaggle?, or as part of some software package like Spacy. Then I want to execute a formula like “show me matches of sentences of the form “preposition determined adjective noun””. I can develop it from here but this would be a good start. Does anyone have a suggestion for an accessible corpus with syntax parsing and searching? Thank you very much
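As a sketch of the "formula" step (assuming spaCy's Swedish pipeline sv_core_news_sm as the parsed corpus part, installed via `python -m spacy download sv_core_news_sm`; any parsed corpus would do): a part-of-speech Matcher pattern for preposition + (optional determiner) + adjective + noun sequences, whose hits you can then inspect for definite vs. indefinite adjective forms. The example sentence is made up.

    import spacy
    from spacy.matcher import Matcher

    nlp = spacy.load("sv_core_news_sm")   # Swedish pipeline with tagger/parser
    matcher = Matcher(nlp.vocab)

    # "On hard floor"-style phrases: preposition, optional determiner, adjective, noun.
    matcher.add("PREP_ADJ_NOUN", [[
        {"POS": "ADP"},
        {"POS": "DET", "OP": "?"},
        {"POS": "ADJ"},
        {"POS": "NOUN"},
    ]])

    doc = nlp("Produkten passar bra på hårt golv och på det hårda golvet.")
    for _, start, end in matcher(doc):
        print(doc[start:end].text)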
1
t3_rmrmk1
1,640,250,368
LanguageTechnology
OpenAI Researchers Find Ways To More Accurately Answer Open-Ended Questions Using A Text-Based Web Browser
Long-form question-answering (LFQA), a paragraph-length answer created in response to an open-ended question, is a growing difficulty in NLP. LFQA systems hold the potential to become one of the most important ways for people to learn about the world, yet their performance currently lags behind that of humans. Existing research has tended to concentrate on two key aspects of the task: information retrieval and synthesis.

Researchers at OpenAI have recently developed WebGPT. They outsource document retrieval to the Microsoft Bing Web Search API and use unsupervised pre-training to produce high-quality synthesis by fine-tuning GPT-3. Rather than striving to improve these factors, they concentrate on integrating them with more consistent training goals. The team leverages human feedback to directly enhance the quality of answers, allowing them to compete with humans in terms of performance.

In this paper, the team offers two significant contributions. They create a text-based web-browsing environment that can be interacted with by a fine-tuned language model. This enables the use of general approaches like imitation learning and reinforcement learning to improve both retrieval and synthesis in an end-to-end manner. The team also creates replies with references, sections collected by the model when exploring web pages. This is critical because it allows labelers to assess the factual accuracy of answers without having to engage in a time-consuming and subjective independent research procedure.

Quick Read: [https://www.marktechpost.com/2021/12/22/openai-researchers-find-ways-to-more-accurately-answer-open-ended-questions-using-a-text-based-web-browser/](https://www.marktechpost.com/2021/12/22/openai-researchers-find-ways-to-more-accurately-answer-open-ended-questions-using-a-text-based-web-browser/)

Paper: [https://arxiv.org/pdf/2112.09332.pdf](https://arxiv.org/pdf/2112.09332.pdf)

Open AI Blog: https://openai.com/blog/improving-factual-accuracy/
0.94
t3_rmmn9e
1,640,232,448
LanguageTechnology
How to make DialoGPT output random responses given the same query
Hi, I am making a chatbot with the use of DialoGPT, but I want it to give different responses even if the same question is asked. E.g.:

* How are you doing -> I am good
* How are you doing -> I am doing good
* How are you doing -> I am fine, how are you

Something like that, so it doesn't seem repetitive. This question might be stupid tho 😅😅
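The usual lever here is sampling at generation time: with greedy or beam search the model is deterministic, but passing do_sample=True (plus top-k/top-p and a temperature) makes repeated queries produce varied answers. A minimal sketch with the Transformers library; the specific sampling values are just common defaults to tune, not recommendations:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    query = "How are you doing?" + tokenizer.eos_token
    input_ids = tokenizer.encode(query, return_tensors="pt")

    for _ in range(3):
        reply_ids = model.generate(
            input_ids,
            max_length=100,
            do_sample=True,          # sample instead of always picking the argmax
            top_k=50,
            top_p=0.92,
            temperature=0.8,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))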
1
t3_rmbz8l
1,640,199,168
LanguageTechnology
Do you think that Large Language Models could be used to generate Knowledge Graphs?
Do you know of any such experiments? I keep reading about LLMs using external memory resources, but could they also be used to generate resources such as Knowledge Graphs on a huge scale?

Edit: preliminary results from a little experimentation with GPT-3 (davinci-instruct)

**Prompt**

    knowledge graph described by a list of relations:
    finger -> part of -> hand
    finger -> subclass of -> digit
    music -> subclass of -> sound
    Earth -> instance of -> terrestrial planet
    green -> subclass of -> color
    pathogen -> opposite of -> nonpathogenic organism
    color -> subclass of -> property
    music -> part of -> culture
    culture -> opposite of -> nature
1
t3_rm5pzm
1,640,181,376
LanguageTechnology
Custom Named Entity Recognition (NER) for identifying CVs.
I am thinking of creating a model for extracting entities in a CV such as:

1. Name
2. Address
3. Institute
4. Degree
5. Skill
6. Company
7. School
8. Designation
9. Society - e.g. sport clubs, school societies...

In spaCy there is a very limited number of entities. What about training a model with this data?
0.87
t3_rm1jdc
1,640,165,120
LanguageTechnology
Anyone have experience with Dataiku?
Trying to choose between Dataiku and Databricks, wanna know if anyone has used both and has any preference?
1
t3_rlxxxv
1,640,150,784
LanguageTechnology
Automatically categorised keyword extraction
Standard tools I know for keyword extraction are KeyBERT, PyTextRank, and spaCy's language object which automatically recognises "entities". I would like to automatically categorise keywords. I am considering making my own algorithm or adding on a step after the above keyword extraction. I believe it needs to cluster terms in some way - general semantic relatedness like WordNet, or a graph algorithm like TextRank, or a similar statistical relationship to its lexical environment, maybe by comparing the BERT-generated vectors for each term and then grouping anything with a similarity above a certain score. Then it needs to guess a category name. Maybe BERT could scan through a list of words (from the text or in general) to see which one scores highest in terms of relevance?

I can think of two ideal scenarios:

- tokenize the text
- extract the key terms by mathematically noticing how terms cluster together in terms of cooccurrence. Put them in those groups and pull a term from the source text that is "representative", i.e. it correlates highest with all of them.

Or:

- get keywords with spaCy (I still don't know how their method works)
- cluster their similarity using a BERT score (as mentioned above)
- name each cluster with GPT-3

Could anyone please let me know what they think of these ideas? Thank you very much
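A sketch of the middle route described above (embed the extracted terms, cluster by similarity, then use the member closest to each cluster's centroid as its "representative" name), using sentence-transformers and scikit-learn; the keyword list and the distance threshold are hypothetical:

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering
    import numpy as np

    keywords = ["neural network", "deep learning", "backpropagation",
                "tokenizer", "word segmentation", "byte-pair encoding"]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(keywords, normalize_embeddings=True)

    # Group terms whose embeddings are close; the threshold controls cluster granularity.
    clusters = AgglomerativeClustering(
        n_clusters=None, distance_threshold=1.0
    ).fit_predict(emb)

    for c in set(clusters):
        idx = np.where(clusters == c)[0]
        centroid = emb[idx].mean(axis=0)
        # "Representative" term = the member closest to the cluster centroid.
        rep = keywords[idx[np.argmax(emb[idx] @ centroid)]]
        print(rep, "->", [keywords[i] for i in idx])

Naming clusters with GPT-3 (or any generator) can then be bolted on top, but the centroid-member trick already gives a label without any extra model.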
0.91
t3_rlmqgh
1,640,116,608
LanguageTechnology
Summarize the "idea" of the text and estimate the relevance of the specific expression to it - is TF-IDF a winner here?
Hi guys, let's assume I have a set of documents, 1-3 page(s) each, and I need to solve the following 2 problems (these are *independent* tasks):

1. **summarize (i.e. generalize) the meaning (plot)** of the text in each document
2. estimate which particular document (among all others) **fits best** for a given specific expression (not just a single word!). "Fits" here means that the lexical/logical meaning of the expression is close to that of the document. Like the "treating short-sightedness" expression is a good match for a *medical* document, but a bad one for a *business* document which describes "short-sighted decisions".

Though these 2 tasks are independent in *my* case, they are interconnected as described in this great [article](https://towardsdatascience.com/the-best-document-similarity-algorithm-in-2020-a-beginners-guide-a01b9ef8cf05), which also tests the efficiency of several algorithms on what I described above as Task #2.

**TLDR:**

* to fulfill Task #2, first do Task #1 to vectorize the text
* the winner is the almost 50-year-old (!) TF-IDF algorithm: precise and fast as a gunshot
* BERT can handle mostly non-complicated plots and is waaaaay too slow
* "TF-IDF + **plenty** of data & re-learning" should be enough in most cases, but if you want *more*, go for BERT and get ready to [build](https://cloud.google.com/architecture/building-real-time-embeddings-similarity-matching-system) a more serious infrastructure if you want it to work fast

Will be great if you guys comment on the following:

1. is splitting the text into sentences and vectorizing the best approach to handle Task #1, or are there other, possibly more efficient technologies?
2. have you ever done something similar to Task #1 and Task #2, and which technologies did you use?
3. what are the weaknesses (if you're aware of them) of TF-IDF and/or BERT to keep in mind?
4. any relatively new and potentially promising technologies you can recommend which may help me with either of those 2 tasks? Never too late to learn, right? :)

Thank you!
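For reference, the TF-IDF route for Task #2 is only a few lines with scikit-learn (a generic sketch with toy documents mirroring the short-sightedness example): vectorize the documents once, vectorize the query expression with the same fitted vectorizer, and rank by cosine similarity.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = {
        "medical":  "Treating short-sightedness and other eye conditions in children...",
        "business": "Short-sighted decisions often hurt quarterly revenue and strategy...",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(docs.values())   # one row per document

    query = "treating short-sightedness"
    query_vec = vectorizer.transform([query])               # same vocabulary as the documents

    scores = cosine_similarity(query_vec, doc_matrix)[0]
    for name, score in zip(docs, scores):
        print(name, round(score, 3))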
1
t3_rli1dj
1,640,103,424
LanguageTechnology
Spacy for keyword extraction
Does anybody know the best spaCy method for pulling out keywords, and also context sentences for those keywords, from a text? Thank you
0.72
t3_rleye8
1,640,094,208
LanguageTechnology
Help with pattern matching
Hello everyone,

Is it possible to match regex patterns with spaCy? I need to find this structure in my texts: "Sentence1. Sentence2." and then work with this specific part. I need to tell spaCy not to split sentences inside quotes (in structures "..."). I wrote a regex (and this regex works) to find these patterns, but now I need this regex in the pipeline.

My idea was:

    if patternMatch == True:
        # go through the whole matched pattern and set is_sent_start = False on every token

Or does someone have another idea to tell spaCy not to split inside quotes ("...")? Is it possible with spaCy patterns (dictionary style {"ORTH": ' " '}, ..., {"ORTH": ' " '})?
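One way to wire that idea into the pipeline (a sketch, not the only way): a custom component registered before the parser that walks the tokens, tracks whether it is inside a pair of quotes, and forbids sentence starts there; spaCy's parser respects sentence boundaries that an earlier component has already set. The component name and the straight-quote heuristic are assumptions to adapt.

    import spacy
    from spacy.language import Language

    @Language.component("no_split_inside_quotes")
    def no_split_inside_quotes(doc):
        inside = False
        for token in doc:
            if token.text == '"':
                inside = not inside          # toggle at every quote character
            elif inside:
                token.is_sent_start = False  # forbid a sentence boundary here
        return doc

    nlp = spacy.load("en_core_web_sm")
    nlp.add_pipe("no_split_inside_quotes", before="parser")

    doc = nlp('He said "He went into the store. The store was closed." and left.')
    for sent in doc.sents:
        print(sent.text)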
1
t3_rlcu7k
1,640,086,528
LanguageTechnology
Looking for an NLP model which can rate a text for key features like innovation and disruption
I am looking for an NLP model that evaluates a characteristic of a text, such as innovation or disruption, on a scale of 0-100% and gives me that score. I have 1000 records with ratings and wanted to ask how complicated it is to build a model for this. I have basic knowledge of Python to start with. It is for a paper at my university. Is there any library that is more suitable for this, and where is the best place to start? With best wishes Puzzlehead
0.7
t3_rlb7nk
1,640,079,744
LanguageTechnology
"Creative" Videos on NLP : How Computers "Learn" Languages
I was telling my friend about NLP (Natural Language Processing) earlier today and thought that maybe a YouTube video might be able to do a better job explaining what NLP is. Can anyone recommend any "creative" videos (or blogs, websites, etc.) that illustrate how computers can "learn" languages (e.g. translation, text generation, understanding language) using NLP algorithms? Maybe there are some good TED Talks, or academic university lectures, that do a good job of introducing/explaining NLP?

I tried to look on YouTube - but I was wondering if anyone had any recommendations? Here is what I found:

- [https://www.youtube.com/watch?v=CMrHM8a3hqw](https://www.youtube.com/watch?v=CMrHM8a3hqw)
- [https://www.youtube.com/watch?v=fOvTtapxa9c](https://www.youtube.com/watch?v=fOvTtapxa9c)
- [https://www.youtube.com/watch?v=8S3qHHUKqYk](https://www.youtube.com/watch?v=8S3qHHUKqYk)

Is there something that shows "semantic segmentation"? E.g. how computers learn language in "layers"? E.g. a general idea of language followed by more complex forms of expressing thought?

Thanks!
0.6
t3_rl86qr
1,640,067,840
LanguageTechnology
The Spacy NER model for Spanish is terrible
Has anybody tried to use Spacy for NER in Spanish? I downloaded the biggest pipeline, but when implemented on some text it tends to extract full bits of sentences and label them as MISC (miscellaneous). It does correctly extract people and locations, too, but it seems weird to me that the NER model of one of the 'main' languages would be so bad. Has anybody experienced this?
1
t3_rkx0k3
1,640,033,920
LanguageTechnology
Working in the New York City area; looking for societies or groups dealing with linguistics/ML/NLP
Hi, I just became an NLP data scientist and would like to find people in real life. Would you know where I can find groups to network with NLP data scientists/engineers in real life? Thanks, Daniel
1
t3_rkuavb
1,640,026,496
LanguageTechnology
Best ML to Identify Descriptors of List of Terms?
Hello fellow NLP fanatics, I'm back with another inquiry I know some of you geniuses can help answer. In short, what's the best method, NLP library, or ML approach to identify the actual descriptors for a list of terms? Let's say that we want to know how people talk about a disease; the simple thing would be to start with bigrams (hate cancer, cancer sucks), but of course nothing is that simple. Take this example: "It's horrible, diagnosed at 25 I had to deal with cancer..." Now we know they refer to the cancer as horrible, but it's far removed, so bigrams won't work. We could leverage POS and suggest that the first adjective before or after cancer should be observed. However, this could also pull in several terms not relevant to the description of the disease. Running an LDA or topic model might inform more high-level discussion categories than specific descriptors. Any suggestions to optimize for this kind of research? Much appreciated, N
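A dependency-based sketch of the "nearest adjective isn't enough" idea: instead of grabbing the first adjective by position, follow the parse and collect adjectives syntactically attached to the target term, both attributive ("terrible cancer", amod) and predicative ("cancer is horrible", acomp via the copula). The long-range "It's horrible ... cancer" case still needs coreference resolution on top, which this sketch does not attempt; dependency labels follow spaCy's English scheme and the example texts are made up.

    import spacy

    nlp = spacy.load("en_core_web_sm")
    TARGET = "cancer"

    def descriptors(text):
        doc = nlp(text)
        found = []
        for tok in doc:
            if tok.lemma_.lower() != TARGET:
                continue
            # adjectives directly modifying the term: "a terrible cancer"
            found += [c.text for c in tok.children if c.dep_ == "amod"]
            # predicative adjectives: "cancer is horrible" -> "horrible" (acomp of the verb)
            if tok.dep_ in ("nsubj", "nsubjpass"):
                found += [c.text for c in tok.head.children if c.dep_ == "acomp"]
        return found

    print(descriptors("Cancer is horrible and I hate it."))
    print(descriptors("She survived a terrible cancer."))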
1
t3_rkqf3a
1,640,015,872
LanguageTechnology
Response evaluation in Virtual Assistants
Hi. Does anyone know how virtual assistant responses are monitored, either manually or through automated ways? I'm curious how inappropriate or biased responses are monitored/ prevented.
0.83
t3_rkoh19
1,640,010,368
LanguageTechnology
Meta AI Introduces A New AI Technology Called ‘Few-Shot Learner (FSL)’ To Tackle Harmful Content
For the training of AI models, a massive number of labeled data points or examples are required. Typically, the number of samples needed is tens of thousands to millions. Collection and labeling of these data can take several months. This manual collection and labeling delay the deployment of AI systems that can detect new types of harmful content over different social media platforms. To handle this issue, Meta has deployed a relatively new AI model called “Few-Shot Learner” (FSL) such that harmful content can be detected even if enough labeled data is not available.

Meta’s new FSL deployment is a step towards developing more generalized AI models that will require very few to almost no labeled data for training. FSL falls under the category of an emerging field in AI called meta-learning, where the aim is “learning to learn” rather than “learning patterns” as done in traditional AI models. The FSL is first trained over generic natural language examples, acting as the training set. Next, the model is trained with new policy texts explaining the harmful target content and policy-violating content that has been labeled in the past, which acts as a support set. Meta has reported that their FSL outperforms several existing state-of-the-art FSL methods by 12% on average over various systematic evaluation schemes. For further details, one can consult Meta’s [research paper](https://arxiv.org/pdf/2104.14690.pdf?fbclid=IwAR1PrQI3y71EM5HyTHrdj5ti2hOosvIyRMKQvrRDNqk2ACgxaQnGjtYHnY4).

Quick Read: https://www.marktechpost.com/2021/12/18/meta-ai-introduces-a-new-ai-technology-called-few-shot-learner-fsl-to-tackle-harmful-content/

Paper: [https://arxiv.org/pdf/2104.14690.pdf](https://arxiv.org/pdf/2104.14690.pdf)

Meta Blog: https://ai.facebook.com/blog/harmful-content-can-evolve-quickly-our-new-ai-system-adapts-to-tackle-it
0.36
t3_rjo620
1,639,885,824
LanguageTechnology
Newbie Q: why bother using anything but gpt3?
Why would you use a model offered by HuggingFace?
0.59
t3_rjbvw3
1,639,847,680
LanguageTechnology
BERT model is not learning!
I’m working on a project where I’m using BERT to classify binary stock movements using tweets.

Input - tweets of the last 5 days of a company
Target variable - 0 (if closing - opening price < 0) and 1 (if closing - opening price > 0)

Whatever I do, the model is not learning (training, validation and test accuracy and MCC); in fact the MCC is so random every time. I tried many fine-tuning methods - changing learning rates, epochs, dropouts, layer-wise learning rate decay, reinitialising the last few layers of BERT. But nothing seems to work. Any suggestion as to why this is happening and how to improve it? I’m currently stuck at 50 percent accuracy and my target is 57 percent accuracy. Your help is greatly appreciated.
0.4
t3_rj21d1
1,639,811,712
LanguageTechnology
MLCommons Releases Both A Multilingual Speech Dataset And A Large 30,000 Hour Diverse English Dataset To Drive Democratization of Machine Learning
The [MLCommons Association](https://mlcommons.org/en/), an open engineering community dedicated to making machine learning more accessible to everyone, has [released free datasets and technologies to help democratize machine learning](https://www.globenewswire.com/news-release/2021/12/14/2352036/0/en/MLCommons-Association-Unveils-Open-Datasets-and-Tools-to-Drive-Democratization-of-Machine-Learning.html). The People’s Speech Dataset and the Multilingual Spoken Words Corpus (MSWC) are the two significant new datasets. Organizations can use these ground-breaking and openly licensed datasets to construct improved artificial intelligence models.

Quick Read: https://www.marktechpost.com/2021/12/17/mlcommons-releases-both-a-multilingual-speech-dataset-and-a-large-30000-hour-diverse-english-dataset-to-drive-democratization-of-machine-learning/

People’s Speech Dataset
Research: https://openreview.net/forum?id=R8CwidgJ0yT
Download: https://mlcommons.org/en/peoples-speech/

Multilingual Spoken Words Corpus
Research: https://openreview.net/forum?id=c20jiJ5K2H
Download: https://mlcommons.org/en/multilingual-spoken-words/
0.94
t3_riyxh4
1,639,800,320
LanguageTechnology
Cosine similarity vs Jaccard index vs TF-IDF
Hello, for my Master's thesis I am researching boilerplate in corporate disclosures. Specifically I want to 1. show that similarity in annual reports has been increasing over time and 2. find the cross-sectional characteristics that predict the amount of boilerplate. I will be using annual reports of 1630 Nasdaq-listed firms from the years 2010-2018. I purchased the textbook "Text Mining with R: A Tidy Approach" by Silge and Robinson but it did not provide an answer as to which method to use. Specifically, to measure similarity I'm not sure whether it would be best to use cosine similarity or the Jaccard index. A friend of mine suggested TF-IDF but I do not see how that fits within this context. Any insights are appreciated. Also if you know of any book which would be helpful for my research please let me know. Thanks!
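For intuition on how the measures relate (an illustrative sketch in Python, not thesis-ready code): Jaccard works on sets of tokens, while cosine is usually computed on TF-IDF-weighted vectors, so TF-IDF is less a third alternative than the weighting scheme the cosine typically sits on top of. The toy sentences below are made up.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    a = "the company expects revenue growth next year"
    b = "the company expects modest revenue growth in the coming year"

    # Jaccard: size of the token-set intersection over the union.
    sa, sb = set(a.split()), set(b.split())
    jaccard = len(sa & sb) / len(sa | sb)

    # Cosine on TF-IDF vectors (fit on the pair here; fit on the full corpus in practice).
    tfidf = TfidfVectorizer().fit_transform([a, b])
    cosine = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

    print(round(jaccard, 3), round(cosine, 3))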
1
t3_riq55z
1,639,773,056
LanguageTechnology
Named entity recognition extraction from website
Can anyone recommend a most standard technique for extracting all keywords of a specific kind (i.e. of a certain category, like “species of trees”) from a whole website? Bonus points if the crawler can identify a good context sentence for that term, as well as judge if a context sentence provides/acts as a definition. Ideally, it would grab a context sentence and a definition for each term. My first attempt is going to be using Spacy for Named Entity Recognition, maybe their Prodigy software, or maybe GPT-3 for zero-shot classification. Does anyone know any pre-existing “smart” web crawling libraries which, sort of like Google Search, crawl a website for terms and find a good context sentence for that term? Thanks so much to anyone who can send me in the right direction here. Thanks very much
0.93
t3_rie2pm
1,639,734,784
LanguageTechnology
Scientific Literature Review generation v0.2
Hello everyone, I've recently developed an algorithm to automatically generate a literature review: [https://www.naimai.fr](https://www.naimai.fr/) Hopefully that could be useful for PhDs (and non-PhDs)! More details on the algorithm [here](https://yaassinekaddi.medium.com/scientific-literature-review-generation-386f36b05eae). I'll be thankful if you have any remarks about it :) Cheers,
1
t3_rie1wv
1,639,734,656
LanguageTechnology
OpenAI Releases A New Feature That Allows Developers To Customize GPT-3, Its Powerful Natural Language Processing (NLP) Model
GPT-3 is the advanced natural language processing model developed by OpenAI. It returns a natural language text completion in response to any text request, such as a phrase or a sentence. Developers use GPT-3 (through on-demand charging via the application programming interface (API)) in their applications to do tasks such as text translation and software code development.

OpenAI has recently released new functionality that will allow developers to create their own versions of GPT-3. The new customization option is now available in the API. GPT-3 can execute a wide range of natural language tasks with just a few instances, a notion known as few-shot learning or prompt design. GPT-3 can be customized to produce much better results because it allows users to provide far more instances than prompt design allows.

Get Access: https://beta.openai.com/docs/guides/fine-tuning/preparing-your-dataset

Quick Read: [https://www.marktechpost.com/2021/12/16/openai-releases-a-new-feature-that-allows-developers-to-customize-gpt-3-its-powerful-natural-language-processing-nlp-model/](https://www.marktechpost.com/2021/12/16/openai-releases-a-new-feature-that-allows-developers-to-customize-gpt-3-its-powerful-natural-language-processing-nlp-model/)

Open AI Blog: https://openai.com/blog/customized-gpt3/
0.89
t3_rhw24g
1,639,677,184
LanguageTechnology
phrase similarity
I have a bunch of phrases - not full sentences - for which I want to calculate similarities. (Typically two to six words long, for questions like whether (made up example) "senior data scientist" is more similar to "machine learning engineer" than to "project manager"). I'm looking for an off-the-shelf sort of solution - no one in my company including myself has any NLP experience, and this isn't so important to us that I want to spend weeks developing a whole new skillset for it. My impression from google is that the way to do this is to turn the phrases into vectors and take the cosine similarity of them. It looks like I could use Sentence Transformers (SBERT) with a pre-trained model to get a vector for each phrase. Or I could get a vector for each individual word from some pre-trained model (which one?) and add them up to make a phrase vector. Is there any better approach that I'm missing? Is the SBERT method the way to go for this problem?
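The SBERT route sketched in the post is indeed about as off-the-shelf as it gets; a minimal version with the sentence-transformers package (the model name is just a common general-purpose default, not a recommendation for this specific domain):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    phrases = ["senior data scientist", "machine learning engineer", "project manager"]
    emb = model.encode(phrases, convert_to_tensor=True)

    # Pairwise cosine similarities between all phrases.
    sims = util.cos_sim(emb, emb)
    print(f"data scientist vs ML engineer:  {sims[0, 1]:.3f}")
    print(f"data scientist vs project mgr:  {sims[0, 2]:.3f}")

Adding up individual word vectors also works as a rough baseline, but a sentence-transformer model handles word order and multiword phrases better for roughly the same amount of code.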
0.86
t3_rhtvw0
1,639,670,912
LanguageTechnology
Genetic algorithm
Hello, I would like to apply a genetic algorithm, together with natural language processing, to the TSP (travelling salesman problem). Does anyone have suggestions?
0.5
t3_rhr87f
1,639,662,976
LanguageTechnology
Genetic algorithm
I would like to apply random generation in a genetic algorithm for the TSP. Can anyone suggest ideas? How should I approach this?
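If "random generation" means initializing the GA population with random tours, a minimal sketch (the city coordinates are made up):

```python
import math
import random

cities = [(0, 0), (3, 4), (6, 1), (2, 7), (5, 5)]  # made-up coordinates

def tour_length(tour):
    # Length of the closed tour visiting cities in the given order.
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def random_population(size):
    base = list(range(len(cities)))
    return [random.sample(base, len(base)) for _ in range(size)]

population = random_population(20)
best = min(population, key=tour_length)
print(best, tour_length(best))
```

Selection, crossover (e.g. ordered crossover) and mutation (e.g. swapping two cities) would then operate on these permutations.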
0.25
t3_rhr7k1
1,639,662,848
LanguageTechnology
What are some available tools for multilingual emotion analysis (also question about LIWC)?
As the title says. I've heard of LIWC (which you have to pay for) and NRC Emotion Lexicon. I haven't used either of them yet, but I'm mainly interested in the multilingual aspect. Are there any other tools for emotion analysis out there that are available also for languages other than English? Also, if anybody has paid to use LIWC (for academic purposes), do you automatically have access to all the languages available? Thank you!
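For the lexicon-based route, a rough sketch of counting NRC emotion associations; it assumes the word-level lexicon has been downloaded as a tab-separated word/emotion/association file, and both the filename and the exact layout should be checked against the version you obtain.

```python
from collections import Counter, defaultdict

# Assumed format: word<TAB>emotion<TAB>0-or-1, one line per pair.
lexicon = defaultdict(set)
with open("NRC-Emotion-Lexicon-Wordlevel-v0.92.txt", encoding="utf-8") as f:
    for line in f:
        word, emotion, flag = line.rstrip("\n").split("\t")
        if flag == "1":
            lexicon[word].add(emotion)

def emotion_counts(tokens):
    counts = Counter()
    for tok in tokens:
        counts.update(lexicon.get(tok.lower(), ()))
    return counts

print(emotion_counts("what a wonderful terrible day".split()))
```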
1
t3_rhog6u
1,639,652,480
LanguageTechnology
Designing a Framework for Conversational Interfaces using PL design, API Design, and Constraint Programming
nan
0.67
t3_rhjfib
1,639,632,000
LanguageTechnology
Conferences without APC in NLP
Are there more conferences or workshops like SemEval which target NLP that self funded students can use to publish?
0.9
t3_rgy5kp
1,639,571,840
LanguageTechnology
Help with Sentence Splitting
Does anyone know a way to add a custom sentence-splitting strategy to the spaCy pipeline? The reason is that splits aren't allowed between citation tags. For example: Text: *"He went into the store. The store was closed."* After splitting, this must remain one sentence! It's not allowed to *split* between *store.* and *The* because these two sentences are written between citation tags.
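One way to do this in spaCy 3 is a custom component registered before the parser that forbids sentence starts inside citation spans; a minimal sketch, assuming citations are delimited by straight double quotes (adapt the detection to whatever your citation tags actually look like):

```python
import spacy
from spacy.language import Language

@Language.component("no_split_inside_citations")
def no_split_inside_citations(doc):
    inside = False
    for token in doc:
        if token.text == '"':
            inside = not inside
        elif inside:
            # Forbid a sentence boundary inside the citation.
            token.is_sent_start = False
    return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("no_split_inside_citations", before="parser")

doc = nlp('He said "He went into the store. The store was closed." yesterday.')
print([sent.text for sent in doc.sents])
```

The parser should respect boundaries already set to False by an earlier component, so the two quoted sentences stay together.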
0.85
t3_rgxvdc
1,639,570,816
LanguageTechnology
A new dataset for text classification and domain adaptation in social media
A dataset of \~22,500 labeled documents across four different domains. You can find it here: [https://github.com/p-karisani/illness-dataset](https://github.com/p-karisani/illness-dataset)
1
t3_rgef94
1,639,508,096
LanguageTechnology
Free course: NLP for Semantic Search
Hi all, the first seven chapters of the course [NLP for Semantic Search](https://www.pinecone.io/learn/nlp) that I've been working on have been published today. It's all completely free and covers everything you need to get started with building SotA language models for semantic similarity, from machine translation to question-answering, and more! Semantic search allows us to search language-based data based on the semantics or 'meaning' of a text. It enables machine translation and question-answering; it's how Google understands "what time is it in NYC?", and even allows us to search for images using text-based queries. It is, in essence, a way for us to interact with machines in a more human way. NLP fits in as the 'semantic' in semantic search. Current chapters are: 1. Dense Vectors 2. Sentence Embeddings and Transformers 3. Training Sentence Transformers with Softmax Loss 4. Training Sentence Transformers with MNR Loss 5. Multilingual Sentence Transformers 6. Question Answering 7. Unsupervised Training for Sentence Transformers Let me know what you think, I hope you enjoy it!
1
t3_rg8kll
1,639,491,968
LanguageTechnology
Text Data Augmentation using GPT-2 Language Model
nan
1
t3_rg4yrg
1,639,479,040
LanguageTechnology
Library that takes a pool of words and spits out sentences with only those words?
Hi, I was wondering if there exists a library that can take an array or list of words, say 500, and, using either an API or a model, generate sentences only out of those words? Open source is preferable. It would be nice if the sentences were sensible or real sentences that someone has written, but they don't have to be, as long as they're grammatically correct. It would be nice if something exists for other languages too - I'm trying to experiment with taking French words I know and generating sentences to test myself, without having to deal with new vocabulary words. Thanks.
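If real sentences are acceptable, a cheap alternative to generation is filtering a corpus for sentences whose words all fall inside your pool; a minimal sketch (the pool and the corpus file are placeholders):

```python
import re

pool = {"je", "suis", "le", "la", "dans", "maison", "chat"}  # your known words (placeholder)

def uses_only_pool(sentence):
    words = re.findall(r"\w+", sentence.lower())
    return bool(words) and all(w in pool for w in words)

# Placeholder corpus: one sentence per line.
with open("french_corpus.txt", encoding="utf-8") as f:
    keepers = [line.strip() for line in f if uses_only_pool(line)]

print(keepers[:10])
```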
0.77
t3_rg4xk3
1,639,478,912
LanguageTechnology
Has anybody tried to update the Spacy NER model?
As the title says, have you ever tried to update the Spacy NER model on your own data (as described [here](https://spacy.io/usage/training))? It seems to me that the NER feature just gets worse after retraining, and I don't understand why.
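For reference, a rough spaCy 3 sketch of in-place updating; a common cause of degradation is catastrophic forgetting when the update data only covers the new entities, so mixing in sentences that still mention the original entity types usually helps (the training example below is made up):

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")

# Made-up example; include sentences with the *old* entity types too,
# otherwise the model tends to forget them (catastrophic forgetting).
train_data = [
    ("Acme Corp hired Jane Doe in Paris.",
     {"entities": [(0, 9, "ORG"), (16, 24, "PERSON"), (28, 33, "GPE")]}),
]

optimizer = nlp.resume_training()
with nlp.select_pipes(enable="ner"):
    for _ in range(10):
        losses = {}
        for text, annotations in train_data:
            example = Example.from_dict(nlp.make_doc(text), annotations)
            nlp.update([example], sgd=optimizer, drop=0.3, losses=losses)
        print(losses)
```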
1
t3_rg325c
1,639,470,720
LanguageTechnology
Prefer volume or quality for BERT-based Text classification model
I'll train a binary classifier. Positive ("yes") samples make up about 5 percent of all samples. There are multiple people doing the labelling; they have a pairwise alpha of 0.65. Scenario A: label each sentence once, and have every 10th sentence checked by all workers to measure reliability. This results in 52,000 single-vote samples plus 6,000 multiple-vote samples, together about 3,000 positive labels. Scenario B: triple-label everything, resulting in 20,000 samples where I can take a majority vote, but only 1,000 positive labels. In your experience, is the better quality of samples worth the loss in volume? Edit: Added first paragraph, which got lost by copy-pasting before.
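For scenario B, aggregation could be a simple per-sentence majority vote; a tiny sketch with made-up votes:

```python
from collections import Counter

# Made-up votes: sentence id -> labels from the three annotators (1 = positive).
votes = {
    "s1": [1, 1, 0],
    "s2": [0, 0, 0],
    "s3": [1, 0, 1],
}

majority = {sid: Counter(v).most_common(1)[0][0] for sid, v in votes.items()}
print(majority)  # {'s1': 1, 's2': 0, 's3': 1}
```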
0.86
t3_rfr175
1,639,431,936
LanguageTechnology
Universal Grammar (2) | Noam Chomsky
nan
0.5
t3_rfljcc
1,639,417,856
LanguageTechnology
Are spaces generally used as tokens?
I've recently started looking into different language modelling methods, and once I got to positional embeddings a whole series of questions sprang up for me. One of these is: **Do language models generally use spaces?** During courses at uni I've heard about using both subwords and character-level encodings with many types of language models (RNNs to tree-LSTMs to transformers to seeding input order through probabilistic (frequentist) parsers). However much I might have heard about models, I have heard much less about model **inputs**. So I figured I could ask people who are already more preoccupied with NLP in general: is there any consensus on whether spaces should be used as tokens when working with subwords (either morphemes or something different like byte-pair encoding)?
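In practice it depends on the tokenizer: many subword tokenizers don't emit a separate space token but fold the space into the following piece (GPT-2's byte-level BPE uses a 'Ġ' prefix, SentencePiece uses '▁'). A quick way to see this with Hugging Face tokenizers:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.tokenize("the quick brown tokenizer"))
# e.g. ['the', 'Ġquick', 'Ġbrown', 'Ġtoken', 'izer'] - the Ġ marks a preceding space
```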
1
t3_rfkiun
1,639,415,296
LanguageTechnology
Looking for sentiment analysis datasets in the news domain
I am searching for multiple multi-lingual datasets for news sentiment analysis. It could be a headline or the body. Even non-English datasets would be great to look at.
1
t3_rfhh5u
1,639,407,232
LanguageTechnology
BERT vs. XLNet for texts shorter than 512 tokens.
Hi everybody, I did binary text classification on the IMDB review dataset, where the average token count is around 400. I used BERT and BERT-based models as well as XLNet. XLNet outperformed the others (except RoBERTa), and I believe the main reason is Transformer-XL's ability to capture longer dependencies than BERT. However, I am not confident about this reasoning because BERT can take 512 tokens at once, and I wonder whether this ability of Transformer-XL and XLNet only matters for texts longer than 512 tokens, or whether I can say that, independent of token count, XLNet performs better on longer texts? (P.S.: I recently asked a similar comparison question for twitter sentiment classification, where BERT outperformed XLNet; however, that question is not relevant here.)
1
t3_rfgl6z
1,639,404,672
LanguageTechnology
Topic modelling for labelled documents
Hi Language Tech Community, I have 2000 documents already classified into 60 topic categories (i.e. I have **labelled data**). These topics are found using a string search in the doc, and these strings are then mapped to topics. All of this is done using Excel and Alteryx. The project I am attempting is to **automatically** classify newly encountered docs into one of the 60 topic categories (referenced above), AND I am trying to use topic modelling for this, because tf-idf, word2vec and WordNet-based approaches all gave bad results. As I understand it, topic modelling is **unsupervised** (i.e. it should be run on unlabelled data). **QUESTION**: Is topic modelling helpful in my project?
1
t3_rfexfv
1,639,399,552
LanguageTechnology
Apply OpenNMT-py on T-Rex Dataset
Hi, I'm a data science student and I'm learning the basics of NLP. I find OpenNMT a very interesting tool and I'm trying to understand it; after completing a couple of tutorials, I'm trying to understand how to use OpenNMT for data-to-text tasks. I completed the WebNLG challenge 2017 ([WebNLG Challenge 2017 - WebNLG Challenges](https://webnlg-challenge.loria.fr/challenge_2017/)) using OpenNMT-py, and now I'm trying to follow the same pipeline using the T-REx dataset ([T-REx : A Large Scale Alignment of Natural Language with Knowledge Base Triples](https://hadyelsahar.github.io/t-rex/)). However, I don't understand how I can obtain the src and tgt files from the NIF or JSON files of the T-REx dataset. Is there documentation about this step? How can I use the T-REx dataset with OpenNMT-py? Thanks for your attention.
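I haven't verified the exact T-REx field names, but the general idea is the same as for WebNLG: linearize each triple into a source line and pair it with the aligned text as the target line, then point the OpenNMT-py config at the resulting plain-text files. A rough sketch with assumed JSON keys (`text`, `triples`, `surfaceform`), to be adapted to the real schema:

```python
import json

# The field names below are assumptions about the T-REx JSON schema;
# check them against the actual files before relying on this.
with open("trex_sample.json", encoding="utf-8") as f:
    docs = json.load(f)

with open("src-train.txt", "w", encoding="utf-8") as src, \
     open("tgt-train.txt", "w", encoding="utf-8") as tgt:
    for doc in docs:
        for triple in doc.get("triples", []):
            s = triple["subject"]["surfaceform"]
            p = triple["predicate"]["surfaceform"]
            o = triple["object"]["surfaceform"]
            src.write(f"{s} | {p} | {o}\n")
            tgt.write(doc["text"].replace("\n", " ") + "\n")
```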
0.88
t3_rfcqzk
1,639,391,232
LanguageTechnology
DeepMind's new big language model beats GPT-3 and has 280 BILLION parameters
nan
0.9
t3_rew1j6
1,639,336,704
LanguageTechnology
Is there a professional role in NLP for people who are good at foreign languages and writing?
As the title says, is there such a role, for example someone responsible for collecting, editing, or quality assuring texts, either as input data or generated content?
0.78
t3_repzdq
1,639,318,912
LanguageTechnology
Is there a simple way to split Chinese symbols into words?
Seems like `list()` (in Python) will split the string into individual symbols, but AFAIK the symbols are basically subwords. Is there a way to split the symbols based on word boundaries?
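Chinese word segmentation is its own task, since characters aren't separated by spaces; a minimal sketch with the jieba library:

```python
import jieba

text = "我喜欢自然语言处理"
print(jieba.lcut(text))  # e.g. ['我', '喜欢', '自然语言', '处理']
```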
0.73
t3_reo6e8
1,639,312,512
LanguageTechnology
What makes you interested in NLP?
I'm curious to know how other people have gotten into NLP. I'm a new grad, so I haven't gotten the chance to talk to too many industry veterans other than my professors and a few engineers at work. Personally, I really loved the first linguistics class I took in college, and I ended up taking many more linguistics classes after that. I was a CS major with a strong linguistics background, so NLP was the natural career path for me to take, even though I wasn't super passionate about ML more generally. At my company, almost all of the engineers actually working on NLP that I have talked to are ML and deep learning experts without a particular interest in language. One person who I had talked to said he had completed a PhD in computer vision. When I asked why he would not continue working in that domain, he said that NLP and computer vision are really all the same - just applications of deep learning. I'm curious to know other people's thoughts on this. Which sentiment is more common in your experience?
0.92
t3_red7bm
1,639,270,528
LanguageTechnology
Haystack - an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
nan
0.8
t3_rea7cf
1,639,260,928
LanguageTechnology
DataQA: the new Python app to do rules-based text annotation
nan
0.83
t3_re1whz
1,639,236,096
LanguageTechnology
How to get Job in NLP?
Hi All, I am currently working in the embedded field, mostly on drivers, the Linux kernel, and RTOS. I've developed an interest in NLP over the past 6 months. I've been going through some courses on Coursera and Udemy, and some YouTube tutorials. These are really helpful, and I did some side projects to test my skills. I want to work on NLP as an actual paying job, not only during weekends. This is a completely new field for me and I really don't know how to get a job in it. Please give some tips.
0.93
t3_rdym6v
1,639,224,960