{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Natural language processing (NLP) is an important direction in computer science and artificial intelligence: it studies the theories and methods that let humans and computers communicate effectively in natural language. NLP draws on linguistics, computer science, mathematics, and other disciplines, with the goal of enabling computers to understand, analyze, and generate human language. TensorFlow provides a rich toolset for building and training deep learning models, including NLP models: APIs and libraries such as TensorFlow Text, TensorFlow Hub, and Keras (TensorFlow's high-level API) greatly simplify the implementation of NLP tasks.",
   "id": "8a9401525960eb99"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "TensorFlow is widely used in NLP for tasks such as text classification, sentiment analysis, named entity recognition, machine translation, and text generation. Below are some of the building blocks most commonly used for NLP tasks in TensorFlow:",
   "id": "79793feeae008f5a"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "1. Text embedding: tf.keras.layers.Embedding\n",
    "\n"
   ],
   "id": "ebfc6caeb3eb8862"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "\n",
     "Purpose:\n",
     "The Embedding layer maps positive integers (indices) to dense vectors of a fixed size. It is commonly used in NLP to turn word indices into word-embedding vectors."
   ],
   "id": "605c5ac7dbf03a18"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Example code:",
   "id": "d901b3d06275c328"
  },
  {
   "cell_type": "code",
   "id": "initial_id",
   "metadata": {
    "collapsed": true,
    "ExecuteTime": {
     "end_time": "2024-06-16T17:39:10.550688Z",
     "start_time": "2024-06-16T17:39:10.266736Z"
    }
   },
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.layers import Embedding\n",
    "import numpy as np\n",
    "\n",
    "vocab_size = 1000      # number of distinct token indices\n",
    "embedding_dim = 64     # size of each embedding vector\n",
    "input_length = 10      # length of each input sequence\n",
    "\n",
    "model = Sequential([\n",
    "    Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=input_length)\n",
    "])\n",
    "\n",
    "# A batch of 3 random index sequences stands in for tokenized text\n",
    "test_data = np.random.randint(low=0, high=vocab_size, size=(3, input_length))\n",
    "\n",
    "predictions = model.predict(test_data)\n",
    "print(predictions.shape)  # (batch, input_length, embedding_dim)\n",
    "\n",
    "# The layer accepts any batch size\n",
    "test_cases = [\n",
    "    np.random.randint(low=0, high=vocab_size, size=(1, input_length)),\n",
    "    np.random.randint(low=0, high=vocab_size, size=(5, input_length)),\n",
    "]\n",
    "\n",
    "for case in test_cases:\n",
    "    try:\n",
    "        print(model.predict(case).shape)\n",
    "    except ValueError as e:\n",
    "        print(f\"Error for test case: {e}\")"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1/1 [==============================] - 0s 55ms/step\n",
      "(3, 10, 64)\n",
      "1/1 [==============================] - 0s 27ms/step\n",
      "(1, 10, 64)\n",
      "1/1 [==============================] - 0s 24ms/step\n",
      "(5, 10, 64)\n"
     ]
    }
   ],
   "execution_count": 19
  },
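  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Under the hood, an `Embedding` layer is simply a trainable lookup table: row *i* of its weight matrix is the embedding vector for index *i*. A minimal sketch of this (the sizes below are illustrative, chosen to match the example above):",
   "id": "embedding-lookup-note"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "\n",
    "layer = tf.keras.layers.Embedding(input_dim=1000, output_dim=64)\n",
    "_ = layer(np.array([0]))  # the first call builds the weight matrix\n",
    "\n",
    "weights = layer.get_weights()[0]\n",
    "print(weights.shape)  # (1000, 64): one row per vocabulary index\n",
    "\n",
    "# Looking up index 7 returns exactly row 7 of the weight matrix\n",
    "vec = layer(np.array([7])).numpy()[0]\n",
    "print(np.allclose(vec, weights[7]))  # True\n"
   ],
   "id": "embedding-lookup-demo"
  },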
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "2. Text tokenization: tf.keras.preprocessing.text.Tokenizer\n",
    "\n"
   ],
   "id": "a188bc9c2b5f755b"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "Purpose:\n",
     "Tokenizer converts raw text into sequences of integer indices; it builds the vocabulary and the word-to-index mapping automatically. Note that it splits on whitespace, so languages written without spaces (such as Chinese) need pre-segmentation or char_level=True."
   ],
   "id": "b74a396325d0cd46"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-06-16T17:33:49.838906Z",
     "start_time": "2024-06-16T17:33:49.824605Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from tensorflow.keras.preprocessing.text import Tokenizer\n",
    "\n",
    "texts = [\n",
    "    'I like watching movies',\n",
    "    'The movie is fun',\n",
    "    'I do not like boring movies'\n",
    "]\n",
    "\n",
    "# Keep at most the 100 most frequent words; fit_on_texts builds the index\n",
    "tokenizer = Tokenizer(num_words=100)\n",
    "tokenizer.fit_on_texts(texts)\n",
    "\n",
    "sequences = tokenizer.texts_to_sequences(texts)\n",
    "print(sequences)\n",
    "\n",
    "# Words never seen during fitting are silently dropped (no oov_token was set)\n",
    "test_texts = [\n",
    "    'The movie is great',\n",
    "    'I like watching fun movies',\n",
    "    'Boring movies I do not like'\n",
    "]\n",
    "\n",
    "test_sequences = tokenizer.texts_to_sequences(test_texts)\n",
    "print(test_sequences)\n"
   ],
   "id": "d100565196d28ae",
   "outputs": [],
   "execution_count": null
  },
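  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The integer sequences produced by `Tokenizer` usually have different lengths, while downstream layers such as `Embedding` expect inputs of a fixed size. `tf.keras.preprocessing.sequence.pad_sequences` pads (or truncates) them to a common length; a minimal sketch with made-up sequences:",
   "id": "pad-sequences-note"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "from tensorflow.keras.preprocessing.sequence import pad_sequences\n",
    "\n",
    "sequences = [[1, 2, 4, 3], [5, 6], [1, 9, 10, 2, 11, 3]]\n",
    "\n",
    "# Zero-pad at the end so every sequence has length 6\n",
    "padded = pad_sequences(sequences, maxlen=6, padding='post')\n",
    "print(padded)\n",
    "print(padded.shape)  # (3, 6)\n"
   ],
   "id": "pad-sequences-demo"
  },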
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "3. tf.keras.layers.LSTM\n",
    "\n"
   ],
   "id": "3a6cbc3bf05c36c0"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "Purpose:\n",
     "The LSTM layer is a special kind of recurrent neural network (RNN) layer that can learn long-range dependencies. It is commonly used for sequence data such as text and time series."
   ],
   "id": "697edba5892de19d"
  },
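  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "An LSTM's output shape depends on `return_sequences`: by default it returns only the last step's output, while `return_sequences=True` returns one output per time step (needed when stacking LSTM layers). A quick sketch on random data:",
   "id": "lstm-return-sequences-note"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "\n",
    "x = np.random.rand(2, 10, 128).astype('float32')  # (batch, steps, features)\n",
    "\n",
    "last_only = tf.keras.layers.LSTM(64)(x)\n",
    "print(last_only.shape)  # (2, 64): only the final step's output\n",
    "\n",
    "full_seq = tf.keras.layers.LSTM(64, return_sequences=True)(x)\n",
    "print(full_seq.shape)  # (2, 10, 64): one output per time step\n"
   ],
   "id": "lstm-return-sequences-demo"
  },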
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "4. tf.keras.layers.Dense\n",
   "id": "7f11d752ca967b2a"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Purpose: the Dense layer is one of the most common layer types in neural networks. It computes a weighted sum of its inputs and optionally applies an activation function; it is typically used as the output layer for classification or regression tasks.",
   "id": "19e764147369fce8"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-06-16T17:36:02.180826Z",
     "start_time": "2024-06-16T17:36:01.575818Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.layers import LSTM, Dense\n",
    "\n",
    "model = Sequential([\n",
    "    LSTM(64, input_shape=(10, 128)),   # sequences of 10 steps with 128 features -> 64-dim summary\n",
    "    Dense(10, activation='softmax')    # probabilities over 10 classes\n",
    "])\n",
    "\n",
    "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "model.summary()\n"
   ],
   "id": "bcf5e066d731d477",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_1\"\n",
      "_________________________________________________________________\n",
      " Layer (type)                Output Shape              Param #   \n",
      "=================================================================\n",
      " lstm (LSTM)                 (None, 64)                49408     \n",
      "                                                                 \n",
      " dense (Dense)               (None, 10)                650       \n",
      "                                                                 \n",
      "=================================================================\n",
      "Total params: 50,058\n",
      "Trainable params: 50,058\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "execution_count": 17
  },
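  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The pieces above compose into a small end-to-end text classifier: integer indices go through `Embedding`, an `LSTM` summarizes the sequence, and `Dense` produces class probabilities. A sketch trained on random data just to show the shapes (the vocabulary size, sequence length, and class count are arbitrary choices for illustration):",
   "id": "end-to-end-note"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "from tensorflow.keras.models import Sequential\n",
    "from tensorflow.keras.layers import Embedding, LSTM, Dense\n",
    "\n",
    "vocab_size, seq_len, num_classes = 1000, 10, 3\n",
    "\n",
    "model = Sequential([\n",
    "    Embedding(input_dim=vocab_size, output_dim=64),  # indices -> 64-dim vectors\n",
    "    LSTM(32),                                        # sequence -> single 32-dim summary\n",
    "    Dense(num_classes, activation='softmax')         # class probabilities\n",
    "])\n",
    "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "\n",
    "# Random token ids and labels stand in for real tokenized data\n",
    "x = np.random.randint(0, vocab_size, size=(8, seq_len))\n",
    "y = np.random.randint(0, num_classes, size=(8,))\n",
    "model.fit(x, y, epochs=1, verbose=0)\n",
    "\n",
    "probs = model.predict(x, verbose=0)\n",
    "print(probs.shape)  # (8, 3); each row sums to 1\n"
   ],
   "id": "end-to-end-demo"
  },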
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "5. tf.keras.losses.SparseCategoricalCrossentropy",
   "id": "b2ab5602d80f988"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Purpose: SparseCategoricalCrossentropy computes the cross-entropy loss for multi-class problems whose labels are given as integer class indices.",
   "id": "4afbcf803ddabd53"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Function signature:",
   "id": "451dda2193d60277"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "tf.keras.losses.SparseCategoricalCrossentropy(\n",
    "    from_logits=False,\n",
     "    reduction=tf.keras.losses.Reduction.AUTO,\n",
    "    name='sparse_categorical_crossentropy'\n",
    ")"
   ],
   "id": "a4de005ac04dabab"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "Example code:",
   "id": "b369664d93ab4ae9"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "# y_true given as integer class labels\n",
    "y_true = [2, 1, 1]\n",
    "y_pred = [[0.5, 0.3, 0.2], [0.3, 0.6, 0.1], [0.4, 0.4, 0.2]]\n",
    "loss = tf.keras.losses.SparseCategoricalCrossentropy()\n",
    "print(loss(y_true, y_pred))  # tf.Tensor(1.0121847, shape=(), dtype=float32)\n",
    "\n",
    "# For comparison: the same labels in one-hot form with CategoricalCrossentropy\n",
    "y_true = [[0, 0, 1], [0, 1, 0], [0, 1, 0]]\n",
    "y_pred = [[0.5, 0.3, 0.2], [0.3, 0.6, 0.1], [0.4, 0.4, 0.2]]\n",
    "loss = tf.keras.losses.CategoricalCrossentropy()\n",
    "print(loss(y_true, y_pred))  # same value: tf.Tensor(1.0121847, shape=(), dtype=float32)\n"
   ],
   "id": "ab330aa316def8f3"
  }
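,
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "When the model's final layer outputs raw scores (logits) rather than probabilities, pass `from_logits=True` and the loss applies the softmax internally, which is numerically more stable. A sketch with made-up logits showing the equivalence:",
   "id": "from-logits-note"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "y_true = [2, 1, 1]\n",
    "logits = [[2.0, 1.0, 0.5], [0.2, 2.5, 0.1], [1.0, 1.0, 0.3]]\n",
    "\n",
    "# from_logits=True: softmax is applied inside the loss\n",
    "loss_a = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(y_true, logits)\n",
    "\n",
    "# Equivalent: apply softmax first, then use the default from_logits=False\n",
    "probs = tf.nn.softmax(logits)\n",
    "loss_b = tf.keras.losses.SparseCategoricalCrossentropy()(y_true, probs)\n",
    "\n",
    "print(float(loss_a), float(loss_b))  # the two losses match\n"
   ],
   "id": "from-logits-demo"
  }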
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
