sub: stringclasses (4 values) · title: stringlengths (3-304) · selftext: stringlengths (3-30k) · upvote_ratio: float32 (0.07-1) · id: stringlengths (9-9) · created_utc: float32 (1.6B-1.65B)
LanguageTechnology
BART: Denoising Sequence-to-Sequence Pre-training for NLG & Translation (Explained)
nan
1
t3_q40avj
1,633,707,648
LanguageTechnology
Introduction to Natural Language Processing (blog)
Towards Data Science: [https://towardsdatascience.com/introduction-to-natural-language-processing-nlp-323cc007df3d](https://towardsdatascience.com/introduction-to-natural-language-processing-nlp-323cc007df3d) KDnuggets: [https://www.kdnuggets.com/2019/10/introduction-natural-language-processing.html](https://www.kdnuggets.com/2019/10/introduction-natural-language-processing.html) Feedback is welcome!
0.86
t3_q3yrby
1,633,702,912
LanguageTechnology
Objectives of NLP, NLU & NLG
I read the following on a blog about NLP:

- NLU: reads data and converts it to structured data.
- NLP: converts unstructured data to structured data.
- NLG: writes structured data.

Isn't the NLG part false? Shouldn't it be: "Converts structured data to natural language"? Source: [https://www.xenonstack.com/blog/difference-between-nlp-nlu-nlg](https://www.xenonstack.com/blog/difference-between-nlp-nlu-nlg)
0.5
t3_q3y7nm
1,633,701,248
LanguageTechnology
NLP Conferences with a decent industry track?
I just got back from RecSys 2021 and was pleasantly surprised by the industry presentations. Being mostly an NLP guy, but one who hasn't attended an NLP conference in years, I couldn't stop wondering whether any of 'ours' have a similar focus. Are there any good conferences that mix academia with industry?
1
t3_q3xo05
1,633,699,328
LanguageTechnology
Using CLIP to get sentence/description from image
I want to use CLIP to generate a sentence by inputting an image. I've worked with a lot of implementations where the opposite is done. But I'm not very acquainted with modern text generation models. I'm guessing the principle is similar: optimise the latent vector that CLIP gives you and generate text using this latent vector, convert back into CLIP's latent space again and calculate the loss using the CLIP latent of the image. Any suggestions on which model I should use for this? Preferably one that I can run on a 3090.
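An alternative to the latent-optimisation route, sketched below under the assumption that you can sample candidate captions from any text generator: use CLIP purely as a re-ranker. This uses the standard Hugging Face CLIP checkpoint; `photo.jpg` and the candidate list are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder input image
candidates = [  # e.g. sampled from a separate text generation model
    "a dog playing in the park",
    "a bowl of fruit on a table",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image[0]  # CLIP similarity per candidate
print(candidates[scores.argmax().item()])
```

Re-ranking fits comfortably on a 3090 since it only needs forward passes; the optimisation approach described above would instead backpropagate through the text generator.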
1
t3_q3xmt6
1,633,699,200
LanguageTechnology
Removing whitespace between characters
Is there any NLP algorithm that removes extra whitespace between characters within a word (not between words)? Example: "How m uc h is it?" should be interpreted as "How much is it?" instead of "Howmuchisit". My current code:

```python
tokens = [lemmatizer.lemmatize(word.lower()) for word in nltk.word_tokenize(text) if word not in ignore_words]
```

I appreciate anyone's help!
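One possible approach, a sketch rather than a definitive answer: strip all spaces and re-segment with a dictionary-based splitter such as the `wordninja` package (`pip install wordninja`); whether its built-in English word list fits your domain is an assumption to verify.

```python
import wordninja

text = "How m uc h is it?"
collapsed = text.replace(" ", "")   # "Howmuchisit?"
words = wordninja.split(collapsed)  # ['How', 'much', 'is', 'it'] (punctuation is dropped)
print(" ".join(words))              # "How much is it"
```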
0.86
t3_q3v092
1,633,688,960
LanguageTechnology
How to approach Jurafsky & Martin for learning NLP?
I'm looking to get a good overview/review of NLP in preparation for grad school. I was looking at the PhD programs I'm interested in, and quite a few of them list the Jurafsky & Martin textbook as requisite knowledge for their qualifying examinations. I've read portions of the book for classes in undergrad, but I'm not familiar with all of the topics covered, and I'd also like to review the topics I'm more familiar with. However, the book is quite long and seems tedious to read from cover to cover. If I'm more of a visual learner, do [Jurafsky's NLP lectures from Stanford](https://www.youtube.com/playlist?list=PLLssT5z_DsK8HbD2sPcUIDfQ7zmBarMYv) cover the topics from the textbook well enough? Or is there another way to approach learning from the textbook (or a better way to learn core topics in NLP altogether)?
0.92
t3_q3t9hx
1,633,680,256
LanguageTechnology
LDA model returns same words in all the topics
I'm running an LDA model with 14k unique tokens from 33k documents. The documents are questions and answers from a technical community and are rather short and focused on the same macro topic (SAP cloud Platform). I decided to extract 25 topics as I clustered the tags assigned to the original questions in groups and it seemed logical to divide them in 25 groups. I've run the model with 100 passes and 100 iterations for 7 hours but at the end I am still returned a model in which the topics are defined mostly by the same words and don't show significant differences. What could I do to improve my results?
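For reference, a minimal gensim sketch of this kind of setup; when every topic shows the same words, the usual first remedy is `Dictionary.filter_extremes`, which drops tokens that appear in almost every document (the thresholds below are toy values for the three-document example).

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["sap", "cloud", "platform", "deploy"],
        ["sap", "hana", "database", "query"],
        ["cloud", "integration", "api", "sap"]]

dictionary = Dictionary(docs)
# "sap" appears in every document, so it is removed and cannot dominate every
# topic; on a real 33k-document corpus something like no_below=20, no_above=0.5
# would be more typical.
dictionary.filter_extremes(no_below=1, no_above=0.9)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, id2word=dictionary, num_topics=3, passes=10, iterations=50)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```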
1
t3_q3tf7k
1,633,681,024
LanguageTechnology
Looking for a table to text codebase
Hi, I am trying to implement a table to text summarizer for pharma tables. I am looking for existing codebase which can help me jumpstart the project. Any suggestions? I tried looking for them (papers that use ToTTo, WebNLG etc) but most of them do not have complete code. Thanks!
1
t3_q3ihgi
1,633,640,320
LanguageTechnology
Allennlp: What in the frig is a Predictor?
Title says it all. I know there is [a tutorial](https://guide.allennlp.org/training-and-prediction#4), and this description in [the docs](https://docs.allennlp.org/v2.7.0/api/predictors/predictor/): > a `Predictor` is a thin wrapper around an AllenNLP model that handles JSON -> JSON predictions that can be used for serving models through the web API or making predictions in bulk. But I dunno, I just don't get it. I had initially thought a `Predictor` was, intuitively, the "glue" needed on the backend to link up a `Model` and a `DatasetReader` and have them share information, but I'm able to train a model using `allennlp train` + a config without so much as (knowingly) touching a `Predictor`. This finding only heightened my confusion about what a `Predictor` is and why I should care about it. If there are any allennlp users here, can you help me understand the purpose of this component of the pipeline, and how I should use it? Thanks!
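For anyone else wondering, a minimal sketch of the intended use, assuming an archive produced by `allennlp train` (the path is hypothetical): the `Predictor` is only needed at inference time, which is why training works fine without one.

```python
from allennlp.predictors.predictor import Predictor

# Wraps the trained Model plus its DatasetReader, so raw JSON inputs can be
# turned into instances, run through the model, and returned as JSON outputs.
predictor = Predictor.from_path("model.tar.gz")  # hypothetical training archive
output = predictor.predict_json({"sentence": "AllenNLP makes prediction easy."})
print(output)
```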
0.6
t3_q3dtez
1,633,626,880
LanguageTechnology
Just finished my first proper NLP project
Today I launched my first ever Twitter bot, [AAPLinsight](https://twitter.com/AAPLinsights), which focuses on providing sentiment scores for $AAPL. I broke my approach down into three categories: Apple Products, Company News and Social Media. The sentiment scores come from around 20 different sources on the web. The base model I used was BERT, and I added some additional layers to create a sentiment classifier that specialises in financial news sentiment. Although it may be quite a simple project, I think it is quite cool, and thank you to the subreddit for all the advice!
0.9
t3_q3c95t
1,633,622,528
LanguageTechnology
T-V Distinction Classifier
Hi all, a bit of a shot in the dark, but I was wondering if there are any available tools to detect whether a sentence in Spanish (or any language with this distinction) is using the formal or informal form of "you", i.e. the T-V distinction? While one can make a naive baseline by explicitly checking for "tú" or "usted" in Spanish, this wouldn't capture verb conjugations or the like.
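A sketch of the naive baseline plus one small improvement, assuming spaCy's `es_core_news_sm` model: second-person verb morphology catches conjugated informal forms even when the pronoun is dropped (usted takes third-person verbs, so the formal side still relies mostly on the pronoun appearing).

```python
import spacy

nlp = spacy.load("es_core_news_sm")

def tv_guess(sentence: str) -> str:
    doc = nlp(sentence)
    lemmas = {t.lemma_.lower() for t in doc}
    if {"usted", "ustedes"} & lemmas:
        return "formal"
    if {"tú", "vosotros", "vos"} & lemmas:
        return "informal"
    # pro-drop case: informal address shows up as 2nd person on the verb
    for t in doc:
        if t.pos_ in ("VERB", "AUX") and "2" in t.morph.get("Person"):
            return "informal"
    return "unknown"

print(tv_guess("¿Cómo estás?"))       # informal (2nd-person verb, no pronoun)
print(tv_guess("¿Cómo está usted?"))  # formal
```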
1
t3_q3b4qv
1,633,619,328
LanguageTechnology
Styleformer performance. Or anything that turns informal to formal.
Hi, everyone. I have been playing around with Styleformer today and am wondering about performance. I'm unsure if this is the right place to ask. https://github.com/PrithivirajDamodaran/Styleformer I set up a basic Flask server so the model stays loaded in RAM, and each query takes around two seconds on my laptop. What sort of server would be required to make this decently fast? Is it something I'd use DigitalOcean for, or are there better options? Sorry if this question is far too basic. It's my first day using Python and this sort of thing. I love the output of Styleformer and would rather use it than an API. Cheers.
0.81
t3_q37xf3
1,633,609,472
LanguageTechnology
Is Debatepedia website/dataset non-existent?
Hi all, The other day, I was looking at a paper DDA (Diversity Driven Attention) Model. https://arxiv.org/abs/1704.08300 They scraped data from the Debatepedia website for the purpose of Query-Focused Abstractive Text summarization. However the links provided (in the bash script for scraping data from Debatepedia) are not accessible. I.e. I cannot access Debatepedia. https://github.com/PrekshaNema25/DiverstiyBasedAttentionMechanism Does anyone know how I can access Debatepedia? Thanks.
1
t3_q2tpel
1,633,554,944
LanguageTechnology
Best Cleaning Models or Processes
Hello everyone, happy wonderful Wednesday! I wanted to quickly ask the community about their favorite cleaning model or process. Prior to running analysis, as we all know, the data gathering phase will always produce a ton of noise; how do you reduce this in the quickest and most accurate fashion?

- Do you build a pipeline of specific cleaning stages (dedup, irrelevant language, terms used, normalize, remove stop words, lemmatize, etc.)?
- Have you built a model to remove posts and clean the data? How did you train said model? How big was your training dataset? What steps did you take to validate or verify its quality?
- Other processes?

I appreciate any and all comments, have an awesome day! All the best, N
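For concreteness, a sketch of the staged-pipeline option using NLTK; the stage names and their order here are illustrative, not a recommendation of one true pipeline.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

for pkg in ("stopwords", "wordnet", "punkt"):
    nltk.download(pkg, quiet=True)

STOP = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def clean(posts):
    seen, out = set(), []
    for text in posts:
        text = re.sub(r"\s+", " ", text.lower()).strip()   # normalize
        if text in seen:                                   # dedup
            continue
        seen.add(text)
        tokens = [LEMMATIZER.lemmatize(w)                  # lemmatize
                  for w in nltk.word_tokenize(text)
                  if w.isalpha() and w not in STOP]        # stop words, punctuation
        out.append(" ".join(tokens))
    return out

print(clean(["The cats are running!", "the cats are running!"]))  # one cleaned post
```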
1
t3_q2oors
1,633,539,712
LanguageTechnology
What really is perplexity, and why is it important for model evaluation?
I know that it measures uncertainty, but how is it any different from entropy? Super confused and fairly new to NLP, so I'd love any easy-to-understand explanations! Thanks!
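Short answer for context: perplexity is just cross-entropy exponentiated so it reads as a branching factor. With per-token cross-entropy H measured in bits on held-out text w_1..w_N:

```latex
H = -\frac{1}{N}\sum_{i=1}^{N}\log_2 q(w_i \mid w_{<i}),
\qquad \mathrm{PPL} = 2^{H}
```

So a perplexity of k means the model is, on average, as uncertain as if it were choosing uniformly among k tokens; that interpretability is why evaluation papers report perplexity rather than raw entropy.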
0.88
t3_q2bzev
1,633,490,176
LanguageTechnology
Probing Language Model with WIKI-PSE: looking for implementation details
Hi all, I found this [https://github.com/yyaghoobzadeh/WIKI-PSE](https://github.com/yyaghoobzadeh/WIKI-PSE), related to the work [https://arxiv.org/pdf/1906.03608.pdf](https://arxiv.org/pdf/1906.03608.pdf), and I am looking for any implementation or details. For those who want to read or are familiar with the article: I would like to try to implement the 34 MLPs (one for each as described in section 3) but I can't figure out what the input is for each MLP. Also, wanting to probe BERT, I found this other work [https://arxiv.org/abs/2004.12198](https://arxiv.org/abs/2004.12198). But I can't figure out the implementation structure. Thanks to anyone who may be interested :D
1
t3_q29bdq
1,633,480,064
LanguageTechnology
Locate handwriting in mixed text document
Hi all! I currently have a project to OCR mixed text documents. Tesseract is fine for machine text but struggles for handwriting. I am looking for a method to only recognise sections with handwriting so it can be shipped off to a Vision API. Does anyone know any low computational methods to do this? One thought is to use the confidence output from tesseract to filter out bad segments to ship. Thanks
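A sketch of the confidence-filtering idea from the last paragraph, assuming `pytesseract`; the threshold of 40 is an arbitrary starting point and the file name is a placeholder.

```python
import pytesseract
from PIL import Image

img = Image.open("mixed_document.png")  # placeholder scan
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)

suspect_boxes = []
for i, conf in enumerate(data["conf"]):
    c = float(conf)  # -1 marks non-word rows
    if 0 <= c < 40 and data["text"][i].strip():
        suspect_boxes.append((data["left"][i], data["top"][i],
                              data["width"][i], data["height"][i]))

# Crop these low-confidence regions and ship only them to the vision API.
print(suspect_boxes)
```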
0.67
t3_q28u5u
1,633,478,400
LanguageTechnology
New to Python and NLP but have to work on a basic NLP project at work (classification of text into a topic)
I know very basic Python (syntax, not programming concepts) but that's about it. Can someone please advise me on where to begin? Should I learn Python properly first, maybe get myself a course? I really want to do well at work and hence thought I could ask here for advice.
1
t3_q23ife
1,633,461,760
LanguageTechnology
Your experience with referrals in the industry.
I'm currently finishing my PhD and looking into different options for a job, and I'm trying to understand the role of referrals better. Some companies pay as much as 10k for successful referrals, so I'm curious about the experience you have had with referrals in the past. Have you referred friends for positions? Why or why not? Is it weird to ask for a referral, and vice versa, have you ever asked somebody if you can refer them for a position out of the blue?
1
t3_q2108w
1,633,454,080
LanguageTechnology
Hot off the press! Exploring NLP Part 2: A New Way to Measure the Quality of Synthetic Text
nan
1
t3_q1yyld
1,633,448,192
LanguageTechnology
Free 'course' on vector similarity search and Faiss!
Hi all, I've been working with [Pinecone](https://www.pinecone.io) for the last few months on putting together a big set of articles and videos covering many of the **essentials behind vector similarity search**, and how to apply what we learn using **Faiss** (and sometimes even plain Python). Today we released the final (for now) article on HNSW. With that, I wanted to share a *'course guide'* with you all, every link below takes you to the article, and in each article, we included one or two videos too - you can read and watch in whichever order you like, but we think this makes the most sense! # Course Guide ## Part 1: Introduction 1. [Semantic Search: Measuring Meaning From Jaccard to Bert](https://www.pinecone.io/learn/semantic-search/) 2. [Getting Started with Faiss](https://www.pinecone.io/learn/faiss-tutorial/) 3. [Nearest Neighbor Indexes for Similarity Search](https://www.pinecone.io/learn/vector-indexes/) ## Part 2: Algorithm Deep Dives 4. [Traditional Locality Sensitive Hashing (LSH)](https://www.pinecone.io/learn/locality-sensitive-hashing/) 5. [Random Projection for LSH](https://www.pinecone.io/learn/locality-sensitive-hashing-random-projection/) 6. [Compression with Product Quantization](https://www.pinecone.io/learn/product-quantization/) 7. [Hierarchical Navigable Small Worlds (HNSW) Graphs](https://www.pinecone.io/learn/hnsw/) ## Part 3: More Advanced Index Concepts 8. [Filtering: The Missing WHERE Clause in Vector Search](https://www.pinecone.io/learn/vector-search-filtering/) 9. [Composite Indexes: Facebook AI and the Index Factory](https://www.pinecone.io/learn/composite-indexes/) We've written and recorded *a lot* of content, hopefully, you'll find vector search as fascinating as I do :)
0.98
t3_q1xky0
1,633,444,224
LanguageTechnology
Identifying medical information in text?
I have access to a large dataset of medical texts - notes from doctors about patients etc. - and was wondering if there is a way to take one such text and automatically create tags for it. Let's say the text describes the condition of a patient with Covid, then the algorithm would look over the text and filter out terms like "cough" "covid-19" "high temperature" etc. I guess what I am looking for is a dataset of medical terms I could use on the texts. If I want to train an ML model for this, focusing on one disease for the beginning, what would be a good amount of training data? I could tag a bunch of texts myself and just provide this as training data. Obviously, I'm pretty new to the whole field, so links to similar projects or papers would be great too.
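Before training anything, a gazetteer baseline is cheap to try; here's a sketch assuming spaCy's `PhraseMatcher`, where the term list stands in for a real medical vocabulary (e.g. terms pulled from UMLS or MeSH):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching
terms = ["cough", "covid-19", "high temperature", "shortness of breath"]
matcher.add("MEDICAL_TERM", [nlp.make_doc(t) for t in terms])

doc = nlp("Patient presents with persistent cough and high temperature; suspected COVID-19.")
tags = {doc[start:end].text.lower() for _, start, end in matcher(doc)}
print(tags)  # {'cough', 'high temperature', 'covid-19'}
```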
0.83
t3_q1tmtb
1,633,429,888
LanguageTechnology
German POS Corpus for Commercial use
I'm trying to find a German corpus with POS tags that can be used for commercial purposes. I know about the TIGER corpus, for which you can get a commercial license, at least in theory... however they haven't responded in months. Is there any alternative?
1
t3_q1t4ur
1,633,427,584
LanguageTechnology
Phone interview for Language Engineer job at Amazon
I have one coming up soon and have no idea how to prepare for it or what kind of questions I should expect. I tried to search reddit and only found posts about onsite interviews. If anyone could share their experience I'd be very grateful. Not sure if important, but the job is not language-specific afaik. I was told earlier that I will be interviewed by 2 people but in the most recent email, the recruiter says "interviewer" singular, so not sure anymore.
0.88
t3_q1kfk5
1,633,397,632
LanguageTechnology
Groningen Master in Voice Technology
[https://www.rug.nl/masters/voice-technology/](https://www.rug.nl/masters/voice-technology/) Hey guys, anyone doing the new MSc in Voice Technology at the University of Groningen? It sounds quite interesting and they seem to accept students from a very diverse background, unlike many CL Master's. It doesn't seem to be very NLP-focused though, which might be a bummer for many people on this sub. Anyway, the degree apparently only started this fall, and there's little information on the actual contents. So, if anyone's doing it, I would love to know what it's like!
1
t3_q1ide0
1,633,390,720
LanguageTechnology
Small-Bench NLP: Benchmark for small single GPU trained models in Natural Language Processing
nan
1
t3_q1dain
1,633,376,512
LanguageTechnology
Question-Answering Model
Hey guys! I am a bit new to NLP and Question-Answering in general. How would one create a Question-Answering model on a very specific domain? I know that there are ways to train a given model (SimpleTransformers for example) but I was wondering what you guys would suggest for such a task.
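Before fine-tuning on your domain, it's worth getting a baseline from an off-the-shelf extractive QA model; a sketch using the Hugging Face pipeline (the model name is a common SQuAD-tuned checkpoint):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What does the retriever do?",
    context=(
        "In an extractive QA pipeline, the retriever selects relevant "
        "documents and the reader extracts the answer span."
    ),
)
print(result["answer"], result["score"])  # answer span plus model confidence
```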
1
t3_q19000
1,633,364,992
LanguageTechnology
Entity extraction from videos?
Hi all, I am working on a recommendation engine which suggests the most likely related video(s) for a given news article. There is little to no metadata beyond the video title, so the approach I am considering is automatically transcribing the video, performing entity extraction on the transcript, performing the same entity extraction on the article text, and comparing the two. My worry is that entity extraction will be negatively impacted by noisy transcription. Does anyone have any recommendations on NER from messy data, or on whether my approach to the problem of linking relevant videos to articles has merit? Thanks
1
t3_q16m99
1,633,357,952
LanguageTechnology
I just released a "Youtube name generator" over the weekend by training a massive neural network
nan
0.67
t3_q157ds
1,633,353,344
LanguageTechnology
Creating a dataset for summarization
I'm creating a dataset for summarization and have crawled 100k articles and summaries from 10 news sites. Obviously some articles are not good for the task, for example when the article is too short. What other requirements do you recommend so that I can filter out the bad ones?
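A sketch of a few common heuristics, with illustrative thresholds to tune on a sample of your crawl; besides length, filtering pairs where the "summary" is just the article lead copied verbatim tends to matter a lot.

```python
def keep(article: str, summary: str) -> bool:
    a_words, s_words = article.split(), summary.split()
    if len(a_words) < 100 or len(s_words) < 10:
        return False                                 # too short to learn from
    if len(s_words) / max(len(a_words), 1) > 0.5:
        return False                                 # "summary" nearly as long as article
    if summary.strip() and summary.strip() in article:
        return False                                 # summary copied verbatim from article
    return True

article = "The market rose sharply today after upbeat earnings reports. " * 20
summary = "Markets rallied strongly today on better than expected quarterly earnings across sectors."
print(keep(article, summary))  # True
```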
1
t3_q13oyo
1,633,347,456
LanguageTechnology
NLP applications using Statistical Methods
I am a novice in NLP. I have started reading about the HMM approach to Part of Speech Tagging and I am enjoying it! I could really use some more NLP techniques that invoke statistical methods to solve interesting problems. I consider myself to have a pretty solid statistical and mathematical background, so I won't shy away from possibly very 'involved' approaches. Cheers!
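Since you're enjoying the HMM chapter, here's a tiny runnable illustration using NLTK's built-in supervised HMM trainer on its treebank sample (Viterbi decoding happens inside `tag`):

```python
import nltk
from nltk.tag import hmm

nltk.download("treebank", quiet=True)
train_sents = nltk.corpus.treebank.tagged_sents()[:3000]

# Estimates transition and emission probabilities from the tagged sentences.
tagger = hmm.HiddenMarkovModelTrainer().train_supervised(train_sents)
print(tagger.tag("The statistical approach works well .".split()))
```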
0.65
t3_q0cetn
1,633,243,264
LanguageTechnology
Teach Computers to Understand Videos and Text without Labeled Data - VideoClip
nan
0.78
t3_q059gv
1,633,214,976
LanguageTechnology
Question about scraping unstructured texts using BERT
Hello, first of all, I'm a data analyst with some data engineering background as well. I never really studied/worked with ML models... I am working on a project where I need to extract data from unstructured texts (PDF documents with multiple pages each). I assume it's possible to find the data I'm looking for in the texts. Since I know nothing about the text, and since it is unstructured, I looked into using BERT, pre-trained on the CoQA dataset to answer questions, based on: [https://towardsdatascience.com/question-answering-with-a-fine-tuned-bert-bc4dafd45626](https://towardsdatascience.com/question-answering-with-a-fine-tuned-bert-bc4dafd45626). I get good results from this pre-trained model if I manually locate the paragraph that contains the answer to the question and let the model predict the answer with that paragraph as input. However, since I don't know which paragraph the answer is hiding in, this is clearly not helping me much. Some ideas I've tried:

* Splitting the text into paragraphs and asking the model to predict an answer to the same question for each paragraph. I assume I'll get the right answer, but I won't know which one it is... so not really helpful. (I might be able to ask the model to predict again on the outputs from the previous step; it seems a bit messy but I'll try.)
* Extracting a list of headers from the text (meaning the title of each paragraph), and asking the model to predict which header's paragraph might contain the answer to my question. This method works in some cases, but certainly not well enough.

Is there an elegant method you are familiar with? I'm sure I'm not the first person to try scraping large documents with BERT. Any inputs or ideas are welcome. Thanks!
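One common pattern, sketched here as an assumption rather than the only fix: add a cheap retrieval step that ranks paragraphs against the question, then run the QA model only on the top-ranked ones and keep the highest-confidence answer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

paragraphs = [
    "The invoice total was 4,200 USD, due within 30 days.",
    "The company was founded in 1998 in Hamburg.",
]
question = "When was the company founded?"

vec = TfidfVectorizer().fit(paragraphs + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(paragraphs))[0]
top = [paragraphs[i] for i in scores.argsort()[::-1][:3]]  # top-k paragraphs

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model
answers = [qa(question=question, context=p) for p in top]
print(max(answers, key=lambda a: a["score"]))  # best answer across candidates
```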
1
t3_q031pb
1,633,207,424
LanguageTechnology
Suggestions on Cool NLP Projects!
Hi all, I'm collecting suggestions for NLP projects that you find cool in 2021! Currently brainstorming for an upcoming group project for school. It's an open-ended project where we have to build NLP models. When browsing past students' project choices I realised many of the projects were repetitive (e.g. hate speech detection, sentiment analysis, predicting stock prices). Would love to see if the community has any fresh ideas! Here are some interesting topics that I've noted down, but I would love to have more to think about. It could be anything, with existing papers or not.

* Detecting personality based on social media
* Automated essay scoring
* Resume scoring/analysis

**EDIT:** Thank you everyone for your contributions! Know that I'm looking into each and every one of them. You guys are awesome.
0.9
t3_pzvtyh
1,633,183,744
LanguageTechnology
Braifun-nlp: A free Natural Language Processing tool to help Researchers brain storm their ideas (Alpha release)
nan
0.75
t3_pzrlwl
1,633,165,056
LanguageTechnology
Roberta Tokenizer Query
I used the roberta-base tokenizer, `tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)`, which was trained on English data, to tokenize Bengali, just to see how it behaves. When I try to encode a Bengali character with `tokenizer.encode('বা')`, I get `[0, 1437, 35861, 11582, 35861, 4726, 2]`, which means it finds tokens in its vocabulary that match the Bengali characters even though it was trained on English. On further exploration I found these are all special characters: `['<s>', 'Ġ', 'à¦', '¬', 'à¦', '¾', '</s>']`. My question is why does this happen; isn't it supposed to output unknown tokens when applied to a new language? Any help greatly appreciated.
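The reason, for anyone who lands here: RoBERTa's tokenizer is byte-level BPE, so the input is first mapped to UTF-8 bytes, and every possible byte has a vocabulary entry ('Ġ' and 'à¦' are printable stand-ins for those bytes). There is therefore no input that produces `<unk>`. A quick check that the bytes round-trip losslessly:

```python
from transformers import RobertaTokenizerFast

tok = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
ids = tok.encode("বা")
print(ids)  # byte-fallback pieces, no unknown token
print(tok.decode(ids, skip_special_tokens=True).strip())  # prints: বা
```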
1
t3_pzr72v
1,633,162,880
LanguageTechnology
Microsoft AI Unveils ‘TrOCR’, An End-To-End Transformer-Based OCR Model For Text Recognition With Pre-Trained Models
The problem of text recognition is a long-standing issue in document digitalization. Many current approaches for text recognition are built on top of existing convolutional neural network (CNN) models for image understanding and recurrent neural network (RNN) models for char-level text generation. There has been some recent progress in text recognition by taking advantage of transformers, but these approaches still need a CNN as the backbone. Despite various successes of the current hybrid encoder/decoder methods, there is definitely some room to improve with pre-trained CV and NLP models. The Microsoft research team unveils ‘[TrOCR](https://arxiv.org/pdf/2109.10282.pdf),’ an end-to-end Transformer-based OCR model for text recognition with pre-trained computer vision (CV) and natural language processing (NLP) models. It is a simple and effective model that does not use a CNN as the backbone. TrOCR starts by resizing the input text image to 384 × 384; the image is then split into a sequence of 16 × 16 patches used as the input to image Transformers. The research team used a standard transformer architecture with the self-attention mechanism on both the encoder and decoder parts, where word-piece units are generated as recognized text from an input image. # [4 Min Read](https://www.marktechpost.com/2021/10/02/microsoft-ai-unveils-trocr-an-end-to-end-transformer-based-ocr-model-for-text-recognition-with-pre-trained-models/)| [Paper](https://arxiv.org/pdf/2109.10282.pdf) | [Github](https://github.com/microsoft/unilm/tree/master/trocr)
0.92
t3_pzqqq9
1,633,160,448
LanguageTechnology
Text Classification - Sentiment Classifier without Training Data - Hugging Face NLP
nan
0.5
t3_pzgnu3
1,633,120,896
LanguageTechnology
How to get access to Wu Dao?
Is there any way to get access to the Chinese language model from BAAI? Or is it proprietary?
0.86
t3_pzdb72
1,633,110,784
LanguageTechnology
Get list of authors for topic in gensim atmodel
In the gensim atmodel, `get_author_topics(author_name)` returns the topic distribution for the selected author. Is there any method that, given a topic, returns a list of the most probable authors?
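As far as I know there is no built-in inverse lookup; a sketch of one, assuming a trained `AuthorTopicModel` whose `id2author` mapping holds the author names:

```python
def top_authors_for_topic(model, topic_id, topn=10):
    """Rank authors by their probability mass on one topic."""
    scores = []
    for author in model.id2author.values():
        for t, p in model.get_author_topics(author):
            if t == topic_id:
                scores.append((author, p))
    return sorted(scores, key=lambda x: x[1], reverse=True)[:topn]

# usage (hypothetical): top_authors_for_topic(atmodel, topic_id=3)
```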
1
t3_pza2yi
1,633,101,184
LanguageTechnology
Training GPT-2 with HuggingFace Transformers to sound like a certain author
I'm training a GPT-2 model (transfer learning from a pre-trained model) on "The Complete Works of HP Lovecraft", and my goal is to fine-tune it to look for certain relationships between words, eventually training it to use the same words and similar relationships to the original stories. The training goal would be this: let's say I break down The Call of Cthulhu, Pt. 1: The Horror in Clay into what the primary subject is, who the characters are, what actions they performed, and what order they performed the actions in; I'd like the trained model to match those milestones. What I'm *not* saying is that the story would match the original; rather, the syntax of the story would be the same. Does this make sense? Is GPT-2 with Hugging Face transformers the best way to approach this, or is there some other library I could use? Thanks.
0.88
t3_pz9uy1
1,633,100,544
LanguageTechnology
Please suggest some papers describing advantages of neural MT over statistical MT
I've seen people write about these in an empirical manner, e.g. [https://www.tilde.com/about/news/316](https://www.tilde.com/about/news/316), as well as Philipp Koehn's textbooks on NLP. Are there some good research papers that summarize these findings and/or discuss this in a theoretical manner, i.e. what makes neural MT better than SMT? Thanks!
0.5
t3_pz3iwe
1,633,076,096
LanguageTechnology
Download Wikipedia Text Dump?
Does anyone know of a script that can be used to pull Wikipedia text data (preferably from the XML dump) for processing?
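One established route, sketched under the assumption that the standard dump file has been downloaded from dumps.wikimedia.org: stream plain text out of the XML dump with gensim's `WikiCorpus`.

```python
# Dump: https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
from gensim.corpora.wikicorpus import WikiCorpus

# dictionary={} skips the (slow) vocabulary-building pass
wiki = WikiCorpus("enwiki-latest-pages-articles.xml.bz2", dictionary={})
for i, tokens in enumerate(wiki.get_texts()):  # one token list per article
    print(" ".join(tokens[:12]))
    if i == 2:
        break
```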
0.5
t3_pz1be3
1,633,065,728
LanguageTechnology
[P]AI Biomedical Writer
nan
0.67
t3_pyxtes
1,633,052,416
LanguageTechnology
Automated conversion of NL into formal logic.
Hi. I'm wondering if anyone is familiar with any work/code that deals with translating natural language into formal logic, in particular modal logic/epistemic logic. Thank you!
1
t3_pyw8f3
1,633,046,912
LanguageTechnology
Transformer NLP model similar to GPT-2 345M with nice up-to-date code base and multi-GPU training support?
I am working on an interactive poetry project and I am searching for a model that would be easy to work with. I worked on a previous project that involved a pre-trained version of the 345M GPT-2 model. It delivered great results for our use case. Larger models also worked great, but we opted for this smaller version since we had very limited compute available for inference: this was a personally-funded web-based application, and server time got expensive very quickly. I am now working on a new project that gives us the resources to train and fine-tune that model with our chosen datasets (cloud GPUs got really good and inexpensive in recent years!). We need to train it in both French and English. The datasets we have aren't huge, respectively about 60,000 and 8,000 literary pieces, so using a gigantic model wouldn't really be beneficial. We don't have as much of a restriction on inference compute here, as long as it can run fine on a decent CPU at a few words per second. My initial thought was to simply train the same model, but the code base is somewhat old (not compatible past TensorFlow 1.15), which seems to cause issues with newer Ampere GPUs. It also doesn't support multi-GPU training. I know there is a TensorFlow 2.0 fork, and I know I could spend a bit of time getting multi-GPU working by splitting batches, but time is short, and I figure there must have been a lot of NLP code written since then. So my question is: is there a nice, roughly similarly sized NLP model with a modern codebase you'd recommend for this?
1
t3_pyuhmk
1,633,040,896
LanguageTechnology
How to customize UI
Hello, I'm planning on creating a natural language processing UI to help me with homework, find information on the web, make calculations, and more. My only question is: how can I make the voice the UI responds with unique, and not the same as the first Siri or whatever default voice it uses?
1
t3_pysipu
1,633,034,752
LanguageTechnology
Data Analyst seeking to learn Text Analytics
Hi Everyone, I used an off-the-shelf text and sentiment analysis tool in a previous job. I am an analyst with SQL and Python (for data analysis) skills. I enjoyed text analysis and would like to apply it to use cases in my current job.

- Can you please advise if there are any free tools I may use? It looks like there are none!
- What should I learn in order to be able to use Python for text analysis?

Thanks so much for your time!
0.92
t3_pynvzm
1,633,020,928
LanguageTechnology
A New NLP book for Transformers!
The book Mastering Transformers is out! Our new book Mastering Transformers has been published. In this book, we discuss the transformers revolution: not only the introductory topics and the key aspects of Transformers, but also advanced topics. You can build state-of-the-art models from scratch with advanced natural language processing techniques. I'm the co-author :) [https://www.amazon.com/Mastering-Transformers-state-art-processing/dp/1801077657](https://www.amazon.com/Mastering-Transformers-state-art-processing/dp/1801077657)
0.87
t3_pygz96
1,632,997,632
LanguageTechnology
Difference b/w Elasticsearch and Retriever
I'm in the process of documenting a build of an extractive QA pipeline using Haystack and Elasticsearch. From my understanding, we first take the corpus and store the documents/contexts from the corpus in a sparse (i.e. ElasticsearchDocumentStore) or a dense document store (i.e. FAISS). Once encoded, the retriever (i.e. sparse or dense passage retriever) will perform a similarity search to identify the top-n most relevant documents. The reader will then predict where in each context the answer is located. I'm confused about where Elasticsearch comes into the picture. I read that Elasticsearch is the back-end search engine, but isn't the retriever doing the actual searching/similarity calculations?
1
t3_pychbp
1,632,976,512
LanguageTechnology
Fine-tuning pre-trained word vectors to explore word "meaning"
Hi everyone! Disclaimer: I am a beginner in NLP who finds word embeddings very fascinating! Say I am interested in the "meaning" of some word *w* within a certain corpus. I'd like to explore that meaning by training word embeddings and looking at the nearest neighbors of *w* in the vector space. (1) First of all, would it make sense to do that? Say, then, that I would like to see the "meaning" of *w* in a more general context. (2) Would it also make sense to fine-tune pre-trained word vectors on my corpus? I am wondering if the meaning of *w* would then shift towards something else, which could be an indication that the usage of *w* was somewhat biased/different in my corpus. (3) If all of the above is valid to explore and makes sense, could anyone point me to readings/resources for fine-tuning pre-trained word vectors? I find plenty of explanations, papers and courses on what word embeddings are and how to generate them, but I can't easily find material on fine-tuning. Thanks in advance :)
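A sketch of part (1) with gensim; for parts (2)-(3), note that continuing training from pretrained word2vec vectors is version-sensitive in gensim (historically via `intersect_word2vec_format`), so check the docs of your installed version rather than taking an API from here.

```python
from gensim.models import Word2Vec

# toy corpus; each inner list is one tokenised sentence from your corpus
sentences = [["bank", "river", "water", "flow"],
             ["bank", "money", "loan", "account"],
             ["river", "water", "fish"]]

model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=200, seed=0)
print(model.wv.most_similar("bank", topn=3))  # nearest neighbours of w = "bank"
```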
1
t3_py4zts
1,632,950,400
LanguageTechnology
Baidu AI Research Releases PLATO-XL: World’s First Dialogue Generation (NLP) Model Pre-Trained On 11 Billion Parameter
Artificial intelligence (AI) applications have a significant impact on our daily lives, making them easier. One such application is AI bots, which have already proven effective at automating day-to-day tasks. These bots gather data and even imitate real-time human discussions, allowing humans to focus on more strategic activities. However, having clear, informative, and engaging conversations in the same manner that humans do is difficult for AI bots. Bots must have high-quality open-domain dialogue systems if they are to serve as emotional companions or intelligent assistants. As pre-training technology improves models' ability to learn from vast amounts of unannotated data, mainstream research concentrates on making better use of massive data to improve open-domain discussion systems. # [4 Min Read](https://www.marktechpost.com/2021/09/29/baidu-ai-research-releases-plato-xl-worlds-first-dialogue-generation-nlp-model-pre-trained-on-11-billion-parameter/) | [Paper](https://arxiv.org/abs/2109.09519) | [BAIDU Blog](http://research.baidu.com/Blog/index-view?id=163)
1
t3_py23t7
1,632,941,824
LanguageTechnology
Release John Snow Labs Spark-NLP 3.3.0: New ALBERT, XLNet, RoBERTa, XLM-RoBERTa, and Longformer for Token Classification, 50x times faster to save models, new ways to discover pretrained models and pipelines, new state-of-the-art models, and lots more!
nan
1
t3_pxxkyd
1,632,928,768
LanguageTechnology
Looking for best way to do embedding search in production
Hi all, I came across a problem of finding similar documents in a huge corpus, and I'm looking for your help to figure out the best possible solution. What I am looking for is: given a new document, I want to retrieve similar documents based on semantic similarity from a collection of documents (millions, billions in number). Currently I am looking at pre-computing embeddings for all the documents in the corpus and storing them somehow (maybe Elasticsearch). Then, whenever a new document comes in, calculate its embedding and find similar documents (with some threshold). Since the documents are huge in number, and for every new document I have to calculate similarity against all documents, this is way too time-consuming. So I'm looking for a way to reduce complexity and latency. (Results should be achieved in less than a second.) Help me out if you know anything similar, or tell me how I should approach such a problem.
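This is the textbook use case for approximate nearest-neighbour indexes; a sketch with Faiss HNSW, where random vectors stand in for your precomputed embeddings (L2 on unit-normalised vectors ranks identically to cosine similarity):

```python
import numpy as np
import faiss

dim = 384
doc_vecs = np.random.rand(100_000, dim).astype("float32")  # stand-in embeddings
faiss.normalize_L2(doc_vecs)

index = faiss.IndexHNSWFlat(dim, 32)  # 32 = graph connectivity (speed/recall knob)
index.add(doc_vecs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
dists, ids = index.search(query, k=10)  # top-10, typically well under a second
print(ids[0])
```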
1
t3_pxro46
1,632,907,136
LanguageTechnology
Google AI Introduces Translatotron 2 For Robust Direct Speech-To-Speech Translation
The Natural Language Processing (NLP) domain is experiencing remarkable growth in many areas, including search engines, machine translation, chatbots, home assistants and many more. One such application of S2ST (speech-to-speech translation) is breaking language barriers globally by allowing speakers of different languages to communicate. It is therefore extremely valuable to humanity in terms of science and cross-cultural exchange.  Automatic S2ST systems are typically made up of a series of subsystems for speech recognition, machine translation, and speech synthesis. However, such cascade systems may experience longer latency, information loss (particularly paralinguistic and non-linguistic information), and compounding errors between subsystems. Google’s recent study presents the improved version of Translatotron, which significantly enhances performance. [Translatotron 2](https://arxiv.org/abs/2107.08661) employs a new way for transferring the voices of the source speakers to the translated speech. Even when the input speech involves numerous speakers speaking in turn, the updated technique to voice transference is successful while also decreasing the potential for misuse and better complying with our AI Principles.  # [5 Min Read](https://www.marktechpost.com/2021/09/28/google-a-introduces-translatotron-2-for-robust-direct-speech-to-speech-translation/) | [Paper](https://arxiv.org/abs/2107.08661) | [Google AI Blog](https://ai.googleblog.com/2021/09/high-quality-robust-and-responsible.html)
1
t3_pxn8zo
1,632,887,424
LanguageTechnology
Loss stuck. Model for speech-to-text system
I'm trying to build a speech-to-text system. My data is 4-10 second audio wave files and their transcriptions (preprocessing steps are char-level encoding of the transcriptions and extracting mel-spectrograms from the audio files). My model architecture is: 3 conv1d layers with positional encoding for the audio, embedding and positional encoding for the encoded transcription, both used as input to a transformer model, and lastly a dense layer. The loss function is cross-entropy and the optimizer is Adam. The problem is that the loss always gets stuck at some point: it starts around 3.8 (I have 46 classes) and after some batches it decreases to e.g. 2.8 and gets stuck there. It bounces around that value and never decreases again. I've tried changing the parameters of the model, and I've changed the optimizer and learning rate, always with the same result. I don't understand what I'm doing wrong. [Training Loss](https://i.stack.imgur.com/1q8Jc.png)
0.78
t3_pxbkw1
1,632,850,816
LanguageTechnology
Using NLP to parse and analyse cooking recipes.
Hey everyone, I'm an intermediate programmer with an interest but no experience in Natural Language Processing, and I was hoping to get some guidance. I'm trying to write a command-line program that takes plain text files of recipes and returns an analysis of potential typos in weight, volume, temperature, time, etc. For example, if a given recipe says to bake for 45 seconds instead of minutes. I should also be able to query the recipe for things like "well-cookedness", where (given the previous example) the program would identify that the recipe produces 'uncooked' or 'undercooked' results. I was hoping to do all of the work in Python, and I read that Python's de facto standard NLP library, the Natural Language Toolkit (NLTK), would be a good place to start. I am ready to learn everything as I go along, but I'm hoping for guidance on the overall process of implementing such a project. Please forgive me if the following questions sound stupid 😅:

* Is there an NLP library I should use instead of or in addition to Python's NLTK?
* What recommended AI or NLP techniques should I research and implement for a program like this?
* What would be the main stages of this program? From text analysis straight to querying data, or are there some intermediate steps?

Thank you for reading up to this point and for any advice!
0.77
t3_px9ifn
1,632,844,928
LanguageTechnology
OpenAI’s New Machine Learning Model Can Summarize Any Size Book with Human Feedback
OpenAI has developed a [new model to study the alignment problem of machine learning](https://arxiv.org/pdf/2109.10862.pdf). This model can summarize books of any length by creating summaries of each chapter. Yes, you heard it right: OpenAI's new machine learning model can summarize an entire book. The proposed machine learning model summarizes a small part of the book and then summarizes these summaries to obtain a higher-level overview. This research was done as an empirical study of scaling the alignment problem to tasks that are hard to evaluate directly, which is usually tricky for AI algorithms. # [3 Min Read](https://www.marktechpost.com/2021/09/27/openais-new-machine-learning-model-can-summarize-any-size-book-with-human-feedback/) | [Paper](https://arxiv.org/pdf/2109.10862.pdf) | [OpenAI Blog](https://openai.com/blog/summarizing-books/)
0.92
t3_pwvj3s
1,632,792,576
LanguageTechnology
PLATO-XL: Exploring the Large-scale Pre-training of Dialogue Generation
Abstract: To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations. To train such large models, we adopt the architecture of a unified transformer with high computation and parameter efficiency. In addition, we carry out multi-party aware pre-training to better distinguish the characteristic information in social media conversations. With such designs, PLATO-XL successfully achieves superior performances as compared to other approaches in both Chinese and English chitchat. We further explore the capacity of PLATO-XL on other conversational tasks, such as knowledge grounded dialogue and task-oriented conversation. The experimental results indicate that PLATO-XL obtains state-of-the-art results across multiple conversational tasks, verifying its potential as a foundation model of conversational AI. Paper link: [https://arxiv.org/abs/2109.09519](https://arxiv.org/abs/2109.09519)
1
t3_pwovgu
1,632,771,840
LanguageTechnology
BERT fine-tuning techniques
Hello everyone, I am currently in the process of fine-tuning BERT for a classification problem using a small dataset. I came across this article stepping through a tutorial on how to do so. https://www.analyticsvidhya.com/blog/2020/07/transfer-learning-for-nlp-fine-tuning-bert-for-text-classification/ One area I was curious about in the article was the brief discussion in techniques. They discussed training the entire architecture, freeze some layers or freeze the entire architecture. Can anyone here help point me in a direction to learn more about each technique? More specifically, what the pros and cons? When to apply them in practice? And are these the only ones? Thank you!
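To make the "freeze some layers" option concrete, a sketch with Hugging Face transformers that freezes the embeddings and the first 8 encoder layers; which layers to freeze is exactly the judgment call the article discusses (more frozen layers means faster training and less overfitting on small data, at the cost of adaptability).

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for param in model.bert.embeddings.parameters():
    param.requires_grad = False                 # freeze embeddings
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False             # freeze lower encoder layers

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```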
1
t3_pwj8bi
1,632,755,712
LanguageTechnology
STS-B Glue
Hi guys, has anyone used STS-B before (it's one of the GLUE benchmark tasks)? I'm not really sure how to evaluate my model. The gold labels are human scores between 0-5 corresponding to how similar two sentences are. I have a model which returns vector representations of two sentences. I then compute the cosine similarity and scale the result to be between 0 and 5 by doing ((res+1)/2)*5, but that just seems wrong. Does anyone have any experience with this? Any pointers would be greatly appreciated!
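One detail that may resolve this: STS-B is scored with Pearson and Spearman correlation, and both are invariant to the kind of rescaling described (linear maps for Pearson, any monotonic map for Spearman), so raw cosine similarities can be compared to the 0-5 gold scores directly:

```python
from scipy.stats import pearsonr, spearmanr

gold = [0.0, 2.5, 5.0, 1.0]    # human similarity scores
pred = [-0.2, 0.4, 0.9, 0.1]   # raw cosine similarities, no rescaling needed
print(pearsonr(gold, pred)[0], spearmanr(gold, pred)[0])
```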
1
t3_pwijwb
1,632,753,792
LanguageTechnology
Classify short sentences into 6 different classes using BERT pretrained model
How can I train a pretrained BERT model with a custom dataset that I have in .xlsx format? The training data has 2 columns, an input column and a class column.
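A sketch of the data-loading half, assuming the two columns are literally named "input" and "class" (adjust to your sheet); the result plugs straight into a standard transformers `Trainer` loop.

```python
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer

df = pd.read_excel("train.xlsx")                       # columns: input, class
label_names = sorted(df["class"].unique())             # the 6 classes
df["label"] = df["class"].map({c: i for i, c in enumerate(label_names)})

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = Dataset.from_pandas(df[["input", "label"]], preserve_index=False)
ds = ds.map(lambda batch: tok(batch["input"], truncation=True, max_length=64),
            batched=True)
print(ds[0].keys())  # input_ids, attention_mask, label: ready for Trainer
```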
0.78
t3_pwc0ao
1,632,726,912
LanguageTechnology
[R] Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
Hi all, We have released a survey on current SOTA in BERT model compression. We do a thorough study of various components of BERT-like Transformer models, collect various compression methods in literature and finally provide our insights on future research directions. The paper was recently **published by TACL.** You can find the paper at -> [https://direct.mit.edu/tacl/article/doi/10.1162/tacl\_a\_00413/107387/Compressing-Large-Scale-Transformer-Based-Models-A](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00413/107387/Compressing-Large-Scale-Transformer-Based-Models-A) Hopefully, this can help new NLP researchers get a better understanding of the field. We welcome your feedback.
0.94
t3_pw9c7a
1,632,715,520
LanguageTechnology
Best open source solution to automatically shorten product titles to 60 characters or fewer
Hi everyone! I need an open source solution that could help me automatically shorten tens of thousands of product titles from 100-200 characters to 60 characters or fewer. Is such a miraculous solution available to the poor and uneducated like myself (or even to others more fortunate)? Thanks a lot!
1
t3_pw6plw
1,632,705,536
LanguageTechnology
Struggling with understanding pytorch model code. I need to train this model, but I literally don’t understand how it works (haven’t worked with pytorch previously). Any tips or resources I can get on where I could start from the basics?
Title. I've just started a new position and my group threw a bunch of code at me (like, literally) and told me to train this model. I have no idea where to start. I'd like to start from the basics and learn more about PyTorch training. This is my first time doing this sort of work; I've used TF and Keras before. Any resources on where I should start? BTW: the people I am working with are PhDs and adults who have a ton of experience in NLP, and I'm a high schooler who has to learn all of this by myself in the next 3 days.
1
t3_pw5k9l
1,632,701,312
LanguageTechnology
Gothenburg vs Uppsala Masters
What're the reputations like for these programs? I'm currently doing my undergrad in Linguistics with a minor in CS. I'm a junior, so I'm trying to figure out some good options. I know Edinburgh is good; what about some other schools?
0.81
t3_pw2z4x
1,632,692,352
LanguageTechnology
Need a mentor for his/her guidance in my NLP project
Hi community! I am searching for a mentor who can guide me on how to approach a project I want to build. My project aims to build an NLP model which can take information about a certain topic/query from various sources and summarise the text in a more understandable manner. The key task: the model takes a query from the user, uses Google's search results to extract text from the webpages, understands the semantics, and provides a more summarised and understandable output for the searched topic. As I am new to this, some of my assumptions might be wrong or arbitrary. I don't know how I should approach this problem, and I haven't worked on an NLP project before, but I can learn and work for it. If anyone can mentor me on this, it'll be great. Thanks in advance!
0.67
t3_pvwdxw
1,632,671,232
LanguageTechnology
Training or fine-tuning transformers on weighted sample data
Hi there, I am wondering if there is a way to use weights (e.g. upvotes/downvotes) into the fine-tuning of GPT-2 or a different NLP algorithm. In other words, the higher the human rating given to a sample in the corpus, the more influence it should have on the fine-tuned model. I apologize in advance if this is very basic functionality that I'm just not aware of!
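There's no built-in sample weighting in the standard fine-tuning loop that I know of, but one straightforward realisation is to scale the loss per sample before backpropagating; a sketch with GPT-2, where the mapping from votes to a weight is an assumption you'd design yourself.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text, weight = "An example training sample.", 3.0  # weight derived from upvotes
enc = tok(text, return_tensors="pt")
out = model(**enc, labels=enc["input_ids"])

loss = out.loss * weight  # highly-rated samples contribute larger gradients
loss.backward()           # then optimizer.step() as usual
```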
0.81
t3_pvvyjc
1,632,669,824
LanguageTechnology
Indox - text summarization engine
Hi all! I've developed a cutting-edge summarization engine and want to start a company that will provide AI services to customers. I published an article on Medium: [https://medium.com/@OlexanderKorenyuk/indox-summarizaton-engine-b2fc49864ddf](https://medium.com/@OlexanderKorenyuk/indox-summarizaton-engine-b2fc49864ddf). If you like, please take a look; feedback via the demo area on the website would be much appreciated. Thanks!
0.78
t3_pvvor5
1,632,668,928
LanguageTechnology
[Hiring] Looking for data scientists with NLP experience in USA
Hi all, My team is currently looking for data scientists with NLP experience. The role could potentially be remote from anywhere in the USA. Although the role would involve the usual data science suspects like EDA and ad hoc analysis, there would be a heavy NLP element to the role, including custom NER modeling. Ideal candidate: has industrial data science experience and comfort with messy data. If anyone is interested, pls reach out to me.
0.76
t3_pvlf10
1,632,625,024
LanguageTechnology
Would you say that creating data for relation extraction (RE) is "harder" than creating data for named entity recognition (NER)?
Title is the question. Creating labeled data is expensive for _any_ subtask of machine learning, but I'm focused particularly on the two information extraction subtasks of RE and NER. I'm wondering if it's legitimate to say that "creating data for RE is harder than that for NER" since, well, I don't really have any concrete way to prove the difficulty. I came to wonder this because NER is largely seen by many as a task that's achieved a lot of progress and SoTA NER tools can be used out of the box without any horrendous error cases. Therefore it seems that creating silver standard data for NER is fairly simple (i.e., just use these tools or a SoTA neural model on unlabeled text), but for RE we have to go an extra step. What I mean is that we have to perform NER and then additionally annotate the relation between two entities. Could you say that creating data for RE is more difficult in this regard? Also, is there any research work out there that touches upon this subject? Thanks!
1
t3_pveedg
1,632,600,064
LanguageTechnology
How will machines understand people? That's how! The Folks’Talks understanding test.
[https://youtube.com/watch?v=mlJakDX\_93g&feature=share](https://youtube.com/watch?v=mlJakDX_93g&feature=share)
0.5
t3_puxtcr
1,632,536,960
LanguageTechnology
Facebook AI Unveils Dynatask, A New Paradigm For Benchmarking AI, Enabling Custom NLP Tasks For AI Community
Last year, Facebook AI launched [Dynabench](https://ai.facebook.com/blog/dynabench-rethinking-ai-benchmarking/) as a first-of-its-kind platform that rethinks benchmarking in artificial intelligence. Now, they are introducing ‘Dynatask’, a new feature unlocking Dynabench’s full capabilities for the AI community. [Dynatask](https://ai.facebook.com/blog/dynatask-a-new-paradigm-of-ai-benchmarking-is-now-available-for-the-ai-community/) helps researchers identify weaknesses in NLP models by having human annotators interact with them naturally. Dynatask has developed a new artificial intelligence model benchmarking system that is more accurate and fair than traditional methods. Researchers will be able to utilize the strong capabilities of the Dynatask platform and can compare models on the dynamic leaderboard. This is not limited to just accuracy but includes a measurement approach of fairness, robustness, compute, and memory. When Dynabench was launched, it had four tasks: natural language inference, question answering, sentiment analysis, and hate speech detection. The Facebook AI research team has powered the multilingual translation challenge at Workshop for Machine Translations with its latest advances. Cumulatively these dynamic data collection efforts resulted in eight published papers and over 400K raw examples. # [5 Min Read](https://www.marktechpost.com/2021/09/24/facebook-ai-unveils-dynatask-a-new-paradigm-for-benchmarking-ai-enabling-custom-nlp-tasks-for-ai-community/) | [Facebook Blog](https://ai.facebook.com/blog/dynatask-a-new-paradigm-of-ai-benchmarking-is-now-available-for-the-ai-community/)
0.76
t3_puv5nd
1,632,526,592
LanguageTechnology
We are now publishing some downloadable NLP datasets from reddit posts and comments. First subreddits covered are /r/wallstreetbets (25K posts and 1 million comments) and /r/NoNewNormal (120k posts 2.5 million comments) for Aug 2021
nan
0.98
t3_putyjx
1,632,522,240
LanguageTechnology
A Guide to Building Your First NLP Application to Detect SPAM
nan
0.72
t3_puo2hu
1,632,503,040
LanguageTechnology
Zero or Few Shot NER on Custom Entity
Hey y'all, I'm trying to get a baseline for how good a zero- or few-shot approach would be at recognizing a custom entity (in this case, job titles in German). I've been skimming through a few papers and see that it's certainly possible to do this, but I haven't seen any out-of-box type code that I could use to get a baseline on how effective it'll be. Anyone have any thoughts or ideas on how to approach this?
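One candidate baseline is flair's TARS tagger, which supports zero-shot NER against label names you define; a sketch (note `tars-ner` is English-trained, so German performance is an open question, which is partly what your baseline would measure):

```python
from flair.data import Sentence
from flair.models import TARSTagger

tars = TARSTagger.load("tars-ner")
tars.add_and_switch_to_new_task("job-title-ner", ["job title"], label_type="ner")

sentence = Sentence("Maria arbeitet als Softwareentwicklerin bei Siemens.")
tars.predict(sentence)
print(sentence.to_tagged_string())
```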
1
t3_pumxao
1,632,499,712
LanguageTechnology
FAISS and the Index Factory - an intro to composite indexes for similarity search
Hi all - I put together [an article and videos](https://www.pinecone.io/learn/composite-indexes/) covering the composite indexes for vector similarity search and how we can implement them in Faiss. I've done a lot of articles/videos on faiss + vector similarity search recently and I think this has to be the most useful for building good indexes imo! I hope some of you find it useful, and let me know what you think/if you have questions!
0.93
t3_pujbzg
1,632,488,704
LanguageTechnology
UBIAI
Today, text annotation tools are one of the most prominent parts of machine learning. Research areas such as search engines, chatbots, sentiment analysis, and virtual assistants require text annotation tools for better training of machine learning models. The machine learning industry and AI research require a large amount of annotated data. High-quality annotated data is like a goldmine for them. However, finding and creating this enormous amount of annotated data can be an arduous task, and most of the time, expensive. Fortunately, text annotation tools can help annotate this enormous amount of data in a matter of time. These annotation tools help with named entity recognition annotation, entity extraction, sentiment analysis, relation annotation, document classification, and more. Find out more here: [https://ubiai.tools/](https://ubiai.tools/)
0.33
t3_pugio5
1,632,477,312
LanguageTechnology
Fine-tuning BERT models, alternatives for the last layers?
I'm relatively new to the field of NLP, so excuse me if this is a trivial question. I'm fine-tuning a BERT model to do sentiment analysis, and I have already succeeded. However, I find it interesting that all the tutorials and notebooks I found use the same layers after the BERT encoder, namely a dropout (sometimes) and a dense layer with the appropriate size for the task. Is it common to use different architectures for the layers after the encoder, for example two (or more) dense layers, etc.? Thanks for any insight.
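For comparison, a sketch of a slightly deeper head on top of the encoder's pooled output (an alternative to try, not a recommendation; the single dense layer is common largely because BERT's features usually make extra head capacity unnecessary):

```python
import torch.nn as nn
from transformers import BertModel

class BertWithMLPHead(nn.Module):
    def __init__(self, num_labels: int = 3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Sequential(
            nn.Dropout(0.1),
            nn.Linear(768, 256),  # 768 = hidden size of bert-base
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.head(pooled)
```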
0.93
t3_pu66z3
1,632,435,712
LanguageTechnology
Currently looking for a research internship for my masters thesis
I'm currently writing a letter to companies who will hopefully take me on and give me a project to work on. The problem is that I have no idea what I'm interested in because I'm interested in most things to do with NLP / machine learning. I feel like I should just say "something something transformers, algorithms". I feel like it's hard to be specific when I'm asking them to give me a project? Does anyone else have this issue?
1
t3_pty2mo
1,632,412,032
LanguageTechnology
Summarizing multiple documents into one summary
I have found lots of info on summarizing single documents. But what I am looking for is being able to take multiple documents on the same subject and generate one summary that encompasses several different source documents. The next level of this for me would be to highlight the outlier info in the different documents. Has this been done? Maybe I am searching using the wrong terms to find the info... Any help is appreciated
1
t3_ptv48z
1,632,403,200
LanguageTechnology
Fine-tuning GPT-J: key takeaways
Hello all, We've spent quite some time benchmarking the best fine-tuning techniques for GPT-J at [NLP Cloud](https://nlpcloud.io?utm_source=reddit&utm_campaign=j431103c-ed8e-11eb-ba80-2242ac130007). Finding the best solution was not straightforward and we had to look at things like speed, server costs, ease of development, accuracy of the fine-tuned model... It took time but we ended up with a nice setup (and we are now officially proposing GPT-J fine-tuning + automatic deployment on our platform). Here are our key takeaways: * The best methodology seems to be the one from the Mesh Transformer Jax team: [https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto\_finetune.md](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md) * Fine-tuning on GPU is not ideal. Even several GPUs used in parallel with Deepspeed can be very slow. We used 4 GPUs Tesla T4 in parallel, and it took 1h30 to only compute our first checkpoint (+ 80GB of RAM used...), for a training dataset made up of 20k examples. Maybe a GPU A100 would be worth a try. * Fine-tuning on TPU is very efficient but it takes a TPU v3 because TPUs v2 are running out of memory. It takes around 15mns, for a training dataset made up of 20k examples, which is really awesome. * The overall process is not straightforward as it takes several kind of conversions (converting the datasets to the right format, making a slim version of the model, converting the weights to Transformers...) In the end this is worth the effort, because combining fine-tuning and few-shot learning makes GPT-J very impressive and suited for all sorts of use cases. If you guys have different feedbacks about GPT-J fine-tuning, please don't hesitate to comment, I would love to have your opinion. Hope you found the above useful!
0.97
t3_pttzvk
1,632,399,616
LanguageTechnology
Concatenate to LSTM models
I'm fairly new to NLP and building a model that takes two sub-models and concatenates them. The dataset has two text input columns and the predictor variable has 3 classes. Below is the code I wrote:

```python
model1 = Sequential()
model1.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X1.shape[1]))
model1.add(SpatialDropout1D(0.2))
model1.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
# Shape <KerasTensor: shape=(None, 100) dtype=float32 (created by layer 'lstm_3')>

model2 = Sequential()
model2.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X2.shape[1]))
model2.add(SpatialDropout1D(0.2))
model2.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
# Shape <KerasTensor: shape=(None, 100) dtype=float32 (created by layer 'lstm_4')>

concat_layer = Concatenate()([model1.output, model2.output])
dense_layer = Dense(10, activation='relu')(concat_layer)
output = Dense(3, activation='softmax')(dense_layer)

input_1 = Input(shape=(MAX_LEN,))
input_2 = Input(shape=(MAX_LEN,))
# I have set MAX_LEN=250; both input_1 and input_2 have shape TensorShape([None, 250])

model = Model(inputs=[input_1, input_2], outputs=output)
```

When I run the model I get the below error:

```
ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 250), dtype=tf.float32, name='embedding_3_input'), name='embedding_3_input', description="created by layer 'embedding_3_input'") at layer "embedding_3". The following previous layers were accessed without issue: []
```

What mistake am I making?
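In case it helps whoever finds this: the disconnect happens because each `Sequential` model creates its own implicit input, so `input_1`/`input_2` are never attached to the graph that produces `output`. A sketch of a connected version using the functional API throughout (reusing `MAX_NB_WORDS`, `EMBEDDING_DIM`, `MAX_LEN` from the post above):

```python
from tensorflow.keras.layers import (Concatenate, Dense, Embedding, Input,
                                     LSTM, SpatialDropout1D)
from tensorflow.keras.models import Model

def branch(inp):
    x = Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=MAX_LEN)(inp)
    x = SpatialDropout1D(0.2)(x)
    return LSTM(100, dropout=0.2, recurrent_dropout=0.2)(x)

input_1 = Input(shape=(MAX_LEN,))
input_2 = Input(shape=(MAX_LEN,))

merged = Concatenate()([branch(input_1), branch(input_2)])
dense = Dense(10, activation="relu")(merged)
output = Dense(3, activation="softmax")(dense)

model = Model(inputs=[input_1, input_2], outputs=output)  # graph is now connected
```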
0.69
t3_ptgfvf
1,632,345,088
LanguageTechnology
Asking for Some Help Regarding a System to Help Facilitate Communication between a Deaf/Hard of Hearing Professor and Students in a Classroom Environment
nan
0.67
t3_ptbqlk
1,632,329,600
LanguageTechnology
Interpret 3d/2d shape from its text description
I want to make a model that takes a text input such as "Make a round ball and a pyramid for me please" and gives the output "sphere and cone", since those are the 3D shapes referred to in the sentence. Any idea how I can achieve something like this? Any links that can help me with this task?
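One possible starting point, a sketch rather than a full solution: treat it as zero-shot classification over candidate shape labels with an NLI model, keeping every label above a threshold (the 0.8 cut-off is arbitrary, and mapping synonyms like "round ball" to "sphere" is exactly what the NLI model is being trusted to do):

```python
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
shapes = ["sphere", "cone", "cube", "cylinder", "pyramid"]
text = "Make a round ball and a pyramid for me please"

result = clf(text, candidate_labels=shapes, multi_label=True)
picked = [label for label, score in zip(result["labels"], result["scores"])
          if score > 0.8]
print(picked)  # ideally ['sphere', 'pyramid']
```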
0.81
t3_ptc0g4
1,632,330,368
LanguageTechnology
Pre processing text
I am trying to clean some text of HTML tags, however I cannot manage to remove the newlines and slashes. What am I missing?

Raw text: 'Is there an easy way to get a list of my blogs that require re-tagging?**\[\\'<div class="dm-section-hero--question\\\\\\\\\\\\\\\_\\\\\\\\\\\\\\\_body">\\\\n <p>**Most of my blogs have migrated without a primary tag. I can work through them using the list from my profile page, but the further through the list I get the harder it is to keep track of those I\*\*\\\\\\**'ve done and those I haven**\\\\\\\*\*'t . Is there an easy way to get a list of my blogs that need re-tagging? That would make the job a whole lot easier...**</p><p>**Steve.**</p>\\\\n </div>\\'\]**'

What I do:

```python
soup = BeautifulSoup(raw_text)
text = soup.get_text()
text = re.sub(r'[\ \n]{2,}', ' ', text)
text = re.sub(r'[\t\r\n]', '', text)
text = re.sub(r'\n', ' ', text)
text.replace("\\n", "")
```

What I get: "Is there an easy way to get a list of my blogs that require re-tagging?**\['\\\\n** Most of my blogs have migrated without a primary tag. I can work through them using the list from my profile page, but the further through the list I get the harder it is to keep track of those **I\\\\\\'ve** done and those I **haven\\\\\\'t** . Is there an easy way to get a list of my blogs that need re-tagging? That would make the job a whole lot easier...Steve.**\\\\n '\]**"

What I want: "Is there an easy way to get a list of my blogs that require re-tagging? Most of my blogs have migrated without a primary tag. I can work through them using the list from my profile page, but the further through the list I get the harder it is to keep track of those I 've done and those I haven't . Is there an easy way to get a list of my blogs that need re-tagging? That would make the job a whole lot easier...Steve."
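A sketch of a fix, assuming (from the output shown) that the leftover `\n` sequences are literal backslash-n characters from escaping rather than real newlines: the key bugs are that `str.replace` returns a new string that was never assigned, and that `re.sub(r'\n', ...)` targets a real newline, not the two-character sequence.

```python
import re
from bs4 import BeautifulSoup

def clean(raw_text: str) -> str:
    text = BeautifulSoup(raw_text, "html.parser").get_text()
    text = text.replace("\\n", " ")            # literal backslash-n, reassigned this time
    text = re.sub(r"[\\\[\]*]+", " ", text)    # stray backslashes, brackets, asterisks
    return re.sub(r"\s+", " ", text).strip()   # collapse all remaining whitespace
```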
1
t3_pt6fnm
1,632,313,472
LanguageTechnology
Is there any white paper or research paper explaining the architecture of any NLP engine like Dialogflow or LUIS?
I tried to find one on Google but couldn't find any research paper related to the design or implementation of an NLP engine like Dialogflow, LUIS, etc. I would be really thankful if someone could provide one. Basically I need to complete a POC for designing an NLP engine from scratch.
0.84
t3_pt2on7
1,632,297,216
LanguageTechnology
Recognition of resume and invoice documents
Hello, I need help. In my internship I am asked to detect only invoice and resume documents among a large number of documents of numerous types. I am asked to build a model with NLP, so I should extract text from images or PDFs and then begin the detection/classification process. To be honest, I don't know where to start; I find it a difficult task. Can anyone help me and put me on the right road?
0.99
t3_psvb0k
1,632,268,288
LanguageTechnology
Natural language processing course - Looking for feedback
I'm Sourabh, I lead one of the core TensorFlow teams at Google Brain and worked on data products at Coursera with Andrew Ng. Kaushik Rangadurai, ML Engineer at Facebook, and I are leading a live, cohort-based course on NLP starting November 1st: [https://corise.com/course/natural-language-processing](https://corise.com/course/natural-language-processing). We wanted to share what we've learned in machine learning over the years. You can join the first run of the course (capped at about 30 students) below. If you're open to giving feedback on the class and how we can do better, I'm happy to give a discount.
0.82
t3_pssqci
1,632,260,096
LanguageTechnology
Categorize the data - topic modelling algorithm
Team, I am new to NLP. There is a requirement asking me to categorize some data. The data I have is just one column in Excel; the values are users' daily search queries from the Google browser. I need to run LDA (a topic modelling algorithm) on this data so that the algorithm classifies the queries into some meaningful categories. Thanks,
0.4
t3_psm07i
1,632,240,896