{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# NLP Concepts\n",
    ">古人学问无遗力，少壮工夫老始成。\n",
    "\n",
    ">纸上得来终觉浅，绝知此事要躬行。\n",
    "\n",
    "> ——陆游《冬夜读书示子聿》 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "https://machinelearningmastery.com/what-are-word-embeddings/ <br/>\n",
    "先了解一下词向量\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**词向量**是一种word representation(based on semantic similarities)\n",
    "> A word embedding is a learned representation for text where words that have the same meaning have a similar representation.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What's word vectors\n",
    "\n",
    "> One of the benefits of using dense and low-dimensional vectors is computational: the majority of neural network toolkits do not play well with very high-dimensional, sparse vectors. … The main benefit of the dense representations is generalization power: if we believe some features may provide similar clues, it is worthwhile to provide a representation that is able to capture these similarities.\n",
    "\n",
    "— Page 92, Neural Network Methods in Natural Language Processing, 2017.\n",
    "  \n",
    "//Markdown的语法规范是空格+空格+回车换行哈哈哈！\n",
    "\n",
    "\n",
    "Each word is mapped to one vector and the vector values are learned in a way that resembles a neural network, and hence the technique is often lumped into the field of deep learning.\n",
    "  \n",
    "   The number of features … is much smaller than the size of the vocabulary\n",
    "\n",
    "---\n",
    "\n",
    "   //Actually there's a linguistic theory behind the method \n",
    "\n",
    "   > There is deeper linguistic theory behind the approach, namely the “distributional hypothesis” by Zellig Harris that could be summarized as: words that have similar context will have similar meanings. For more depth see Harris’ 1956 paper “Distributional structure ”.\n",
    "\n",
    "   This notion of letting the usage of the word define its meaning can be summarized by an oft repeated quip by John Firth:\n",
    "\n",
    "  > You shall know a word by the company it keeps!\n",
    "\n",
    "— Page 11, “A synopsis of linguistic theory 1930-1955“, in Studies in Linguistic Analysis 1930-1955, 1962.\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Embedding Layer\n",
    "\n",
    "An embedding layer, for lack of a better name, is a word embedding that is learned jointly with a neural network model on a specific natural language processing task, such as language modeling or document classification.\n",
    "\n",
    "It requires that document text be cleaned and prepared such that each word is one-hot encoded. The size of the vector space is specified as part of the model, such as 50, 100, or 300 dimensions. The vectors are initialized with small random numbers. The embedding layer is used on the front end of a neural network and is fit in a supervised way using the Backpropagation algorithm.\n",
    "\n",
    "> … when the input to a neural network contains symbolic categorical features (e.g. features that take one of k distinct symbols, such as words from a closed vocabulary), it is common to associate each possible feature value (i.e., each word in the vocabulary) with a d-dimensional vector for some d. These vectors are then considered parameters of the model, and are trained jointly with the other parameters.\n",
    "\n",
    "— Page 49, Neural Network Methods in Natural Language Processing, 2017.\n",
    "\n",
    "-----\n",
    "\n",
    "The one-hot encoded words are mapped to the word vectors. If a multilayer Perceptron model is used, then the word vectors are concatenated before being fed as input to the model. If a recurrent neural network is used, then each word may be taken as one input in a sequence.\n",
    "\n",
    "这一句没太看懂……\n",
    "\n",
    "-----\n",
    "\n",
    "\n",
    "This approach of learning an embedding layer requires a lot of training data and can be slow, but will learn an embedding both targeted to the specific text data and the NLP task.\n",
    "\n",
    "\n",
    "\n",
    "总的说来，词嵌入（就是只把词向量嵌入问题空间）层是参数化神经网络的一部分\n",
    "\n",
    "如果我们有K个单词的词库，我们就可以定义词向量空间的维度 D，之后每一个词的向量表示都将和模型的其他参数一起参与训练。理论上讲可以同时优化词向量对于语料的针对性以及ML网络的适应性"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Word2Vec\n",
    "\n",
    "Two different learning models were introduced that can be used as part of the word2vec approach to learn the word embedding; they are:\n",
    "\n",
    "Continuous Bag-of-Words, or CBOW model.\n",
    "Continuous Skip-Gram Model.\n",
    "The CBOW model learns the embedding by predicting the current word based on its context. The continuous skip-gram model learns by predicting the surrounding words given a current word.\n",
    "\n",
    "The continuous skip-gram model learns by predicting the surrounding words given a current word.\n",
    "\n",
    "Word2Vec Training Models\n",
    "Word2Vec Training Models\n",
    "Taken from “Efficient Estimation of Word Representations in Vector Space”, 2013\n",
    "\n",
    "Both models are focused on learning about words given their local usage context, where the context is defined by a window of neighboring words. This window is a configurable parameter of the model.\n",
    "\n",
 The size of the sliding">
    "> The size of the sliding window has a strong effect on the resulting vector similarities. Large windows tend to produce more topical similarities […], while smaller windows tend to produce more functional and syntactic similarities.\n",
    "\n",
    "— Page 128, Neural Network Methods in Natural Language Processing, 2017.\n",
    "\n",
    "// This really is the key point: word vectors are configurable. In other words, Word2Vec is actually a family of techniques rather than a single method.\n",
    "\n",
    "// Google's BERT performs a similar upstream (pre-training) task.\n",
    "\n",
    "<img src='https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2017/08/Word2Vec-Training-Models.png'/>\n",
    "\n",
    "The key benefit of the approach is that high-quality word embeddings can be learned efficiently (low space and time complexity), allowing larger embeddings to be learned (more dimensions) from much larger corpora of text (billions of words)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## GloVe\n",
    "The Global Vectors for Word Representation, or GloVe, algorithm is an extension to the word2vec method for efficiently learning word vectors, developed by Pennington, et al. at Stanford.\n",
    "\n",
    "Classical vector space model representations of words were developed using matrix factorization techniques such as Latent Semantic Analysis (LSA) that do a good job of using global text statistics but are not as good as the learned methods like word2vec at capturing meaning and demonstrating it on tasks like calculating analogies (e.g. the King and Queen example above).\n",
    "\n",
    "GloVe is an approach to marry both the global statistics of matrix factorization techniques like LSA with the local context-based learning in word2vec.\n",
    "\n",
    "----\n",
    "\n",
    "LSA -Latent Semantic Analysis 潜在语义分析，基于Query 查询文档Documents\n",
    "【https://blog.csdn.net/u011630575/article/details/79044324\n",
    "\n",
    "https://blog.csdn.net/callejon/article/details/49811819\n",
    "】\n",
    "\n",
    "word2vec — 语义嵌入\n",
    "\n",
    "----\n",
    "\n",
    "Rather than using a window to define local context, GloVe constructs an explicit word-context or word co-occurrence matrix using statistics across the whole text corpus. The result is a learning model that may result in generally better word embeddings.\n",
    "\n",
    "> GloVe, is a new global log-bilinear regression model for the unsupervised learning of word representations that outperforms other models on word analogy, word similarity, and named entity recognition tasks.\n",
    "\n",
    "— GloVe: Global Vectors for Word Representation, 2014."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Using Word Embeddings 如何使用词向量\n",
    "\n",
    "You have some options when it comes time to using word embeddings on your natural language processing project.\n",
    "\n",
    "This section outlines those options.\n",
    "\n",
    "1. Learn an Embedding\n",
    "\n",
    "You may choose to learn a word embedding for your problem.\n",
    "\n",
    "This will require a large amount of text data to ensure that useful embeddings are learned, such as millions or billions of words.\n",
    "\n",
    "You have two main options when training your word embedding:\n",
    "\n",
    "- Learn it Standalone, where a model is trained to learn the embedding, which is saved and used as a part of another model for your task later. This is a good approach if you would like to use the same embedding in multiple models.\n",
    "- Learn Jointly, where the embedding is learned as part of a large task-specific model. This is a good approach if you only intend to use the embedding on one task.\n",
    "\n",
    "2. Reuse an Embedding\n",
    "\n",
    "It is common for researchers to make pre-trained word embeddings available for free, often under a permissive license so that you can use them on your own academic or commercial projects.\n",
    "\n",
    "For example, both word2vec and GloVe word embeddings are available for free download.\n",
    "\n",
    "These can be used on your project instead of training your own embeddings from scratch.\n",
    "\n",
    "You have two main options when it comes to using pre-trained embeddings:\n",
    "\n",
    "- Static, where the embedding is kept static and is used as a component of your model. This is a suitable approach if the embedding is a good fit for your problem and gives good results.\n",
    "- Updated, where the pre-trained embedding is used to seed the model, but the embedding is updated jointly during the training of the model. This may be a good option if you are looking to get the most out of the model and embedding on your task.\n",
    "Which Option Should You Use?\n",
    "Explore the different options, and if possible, test to see which gives the best results on your problem.\n",
    "\n",
    "Perhaps start with fast methods, like using a pre-trained embedding, and only use a new embedding if it results in better performance on your problem.\n"
   ]
  }
 ],
 "metadata": {
  "file_extension": ".py",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
