{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## How Hadoop Processes Massive Data\n",
    "### A Hadoop distributed cluster provides:\n",
    "      (1) persistent data storage   (2) data computation and processing\n",
    "      Hadoop is a distributed processing framework, implemented in Java, built on the Map-Reduce model.\n",
    "      Map-Reduce is the core data-computation module of Hadoop.\n",
    "### Problems Map-Reduce solves:\n",
    "    (1) Store data redundantly across multiple nodes to guarantee durability.\n",
    "    (2) Move computation to where the data lives, minimizing data movement.\n",
    "    (3) Offer a simple programming model that hides all the underlying complexity.\n",
    "####  Distributed file system: provides a global file namespace, redundancy, and availability, e.g. Hadoop's HDFS.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Map-Reduce Idea\n",
    "### Hadoop components:\n",
    "    (1) Hadoop Common: utility package that provides infrastructure for the other Hadoop modules.\n",
    "    (2) Hadoop HDFS: distributed file system for storing massive data.\n",
    "    (3) Map-Reduce: distributed processing strategy and computation model for massive data.\n",
    "    (4) Yarn: distributed resource management and scheduling (the cluster's information hub).\n",
    "### Map-Reduce structure:\n",
    "    (1) Map: scan each file line by line, extracting the content of interest (keys) as it goes; read the input text and emit a sequence of key-value pairs.\n",
    "    (2) Group by key: sort and shuffle.\n",
    "    (3) Reduce: aggregate, summarize, filter, or transform, then write the result (collect the values belonging to each key and output the statistics)."
   ]
  },
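  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three stages above can be simulated in plain Python. This is a minimal in-memory sketch, not Hadoop itself: the names `map_stage`, `shuffle_stage`, and `reduce_stage` are illustrative, and the shuffle is just a sort-and-group."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of the three Map-Reduce stages over a toy input\n",
    "from itertools import groupby\n",
    "\n",
    "def map_stage(lines):\n",
    "    # emit one ('word', 1) key-value pair per word\n",
    "    for line in lines:\n",
    "        for word in line.split():\n",
    "            yield (word, 1)\n",
    "\n",
    "def shuffle_stage(pairs):\n",
    "    # sort by key, then group: all values of one key end up together\n",
    "    pairs = sorted(pairs)\n",
    "    return [(k, [v for _, v in grp]) for k, grp in groupby(pairs, key=lambda p: p[0])]\n",
    "\n",
    "def reduce_stage(grouped):\n",
    "    # aggregate the value list of each key\n",
    "    return {k: sum(vs) for k, vs in grouped}\n",
    "\n",
    "print(reduce_stage(shuffle_stage(map_stage(['hello world', 'hello hadoop']))))"
   ]
  },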
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Mapper: read text lines from standard input and emit one word per line\n",
    "\n",
    "import sys\n",
    "\n",
    "for line in sys.stdin:            # the raw input text file\n",
    "    line = line.strip()\n",
    "    if not line:\n",
    "        continue\n",
    "    for word in line.split(' '):\n",
    "        print(word)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reducer: count the occurrences of each word arriving on standard input\n",
    "\n",
    "import sys\n",
    "\n",
    "word_dict = {}\n",
    "for line in sys.stdin:\n",
    "    word = line.strip()\n",
    "    if not word:\n",
    "        continue\n",
    "    if word in word_dict:\n",
    "        word_dict[word] += 1\n",
    "    else:\n",
    "        word_dict[word] = 1\n",
    "for word in word_dict:\n",
    "    print(word, word_dict[word])  # after building the dictionary, print its key-value pairs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Map parallelizes better than Reduce, so optimizing the Map stage is the main lever for improving parallel efficiency.\n",
    "#### Both input and output are stored on HDFS.\n",
    "#### In practice, the result of one map-reduce job often serves as the input of another map-reduce task (jobs are chained)."
   ]
  },
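  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chaining mentioned above can be illustrated with a toy example: the output of one \"job\" becomes the input of the next. This is only a sketch; `Counter` stands in for full Map-Reduce jobs, and the function names are ours."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of job chaining: job 1's output feeds job 2.\n",
    "# Job 1: word -> count;  Job 2: count -> how many words have that count.\n",
    "from collections import Counter\n",
    "\n",
    "def job1_word_count(lines):\n",
    "    return Counter(w for line in lines for w in line.split())\n",
    "\n",
    "def job2_count_of_counts(word_counts):\n",
    "    return Counter(word_counts.values())\n",
    "\n",
    "wc = job1_word_count(['a b a', 'c b a'])   # {'a': 3, 'b': 2, 'c': 1}\n",
    "print(job2_count_of_counts(wc))            # one word occurs 3x, one 2x, one 1x"
   ]
  },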
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Master node (mainly responsible for coordinating the system)\n",
    "    Heartbeat mechanism between DataNode and NameNode: a DataNode registers with the NameNode at startup; once registered, it reports all of its block information to the master every hour; a heartbeat is sent every three seconds and returns a result; if no heartbeat arrives for ten minutes, the node is considered unavailable."
   ]
  },
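  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The timeout rule above can be sketched as a simple liveness check. This is a toy model of the decision only; the constant names are ours, not Hadoop's."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy model of the NameNode's liveness rule: a DataNode is considered dead\n",
    "# once its last heartbeat is older than the timeout (10 minutes in the text).\n",
    "HEARTBEAT_INTERVAL = 3   # seconds between heartbeats\n",
    "TIMEOUT = 10 * 60        # seconds without a heartbeat before a node is marked dead\n",
    "\n",
    "def is_alive(now, last_heartbeat, timeout=TIMEOUT):\n",
    "    return (now - last_heartbeat) <= timeout\n",
    "\n",
    "print(is_alive(now=1000, last_heartbeat=999))   # True: heartbeat 1 s ago\n",
    "print(is_alive(now=1000, last_heartbeat=300))   # False: 700 s > 600 s timeout"
   ]
  },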
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "   * M Map tasks and R Reduce tasks\n",
    "   * Practical rules of thumb:\n",
    "   \n",
    "     1. M is usually made much larger than the number of nodes in the cluster, so each node runs several Map tasks.\n",
    "     \n",
    "     2. One Map task is usually assigned per distributed-file-system block.\n",
    "     \n",
    "     3. This improves dynamic load balancing and speeds up task recovery when a node fails.\n",
    "   * R is usually smaller than M."
   ]
  },
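  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rule 2 above (one Map task per block) yields a quick estimate of M. A hedged sketch: 128 MB is a common HDFS default block size, but clusters may configure it differently, and the function name is ours."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def estimate_map_tasks(input_bytes, block_size=128 * 1024 * 1024):\n",
    "    # one Map task per file-system block -> M = ceil(input size / block size)\n",
    "    return math.ceil(input_bytes / block_size)\n",
    "\n",
    "one_tb = 1024 ** 4\n",
    "print(estimate_map_tasks(one_tb))   # 8192 Map tasks for 1 TB at 128 MB blocks"
   ]
  },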
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "   * Map work mainly transforms keys; Reduce mainly transforms values.\n",
    "   * Map-Reduce iterates at the disk level, so each round is expensive; Spark iterates in memory, which is heavy on RAM but much faster.\n",
    "   * Spark is less stable than Map-Reduce; Spark reads the data only once.\n",
    "   * In practice: do feature processing with Map-Reduce, train machine-learning models on the exported features with Spark, and use Hadoop Streaming so Map-Reduce jobs can be written in any language."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Hive\n",
    "### Overview: Hive is a data-warehouse tool built on Hadoop. It maps structured data files onto database tables and offers a simple SQL query interface, translating SQL statements into MapReduce jobs for execution. Its advantage is a low learning curve: SQL-like statements quickly express simple MapReduce aggregations, which makes it well suited to data-warehouse analytics."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "   * Without Hive: user -> mapreduce -> Hadoop data (requires knowing MapReduce)\n",
    "   * With Hive: user -> HQL -> Hive -> mapreduce -> Hadoop data (plain SQL)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Association Rules\n",
    "    One of the main methods in data mining today; it captures the dependence and association between one item and others.\n",
    "    A rule describes the association between samples.\n",
    "    Each sample is called an item; a set of items is a transaction; a meaningful set of items within transactions is an itemset.\n",
    "    The frequency with which an itemset occurs is its support; if an itemset's support exceeds the user-defined minimum support threshold, it is called a frequent itemset.\n",
    "    The confidence of an association rule (essentially a conditional probability): P(Y|X).\n",
    "    A strong association rule is one whose support and confidence both meet the user-defined thresholds."
   ]
  },
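  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Support and confidence from the definitions above can be computed directly. A minimal sketch over a toy transaction database; the function names and sample transactions are ours."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Support and confidence for a rule X -> Y over a toy transaction database\n",
    "def support(itemset, transactions):\n",
    "    # fraction of transactions that contain every item of the itemset\n",
    "    return sum(itemset <= t for t in transactions) / len(transactions)\n",
    "\n",
    "def confidence(X, Y, transactions):\n",
    "    # P(Y|X) = support(X union Y) / support(X)\n",
    "    return support(X | Y, transactions) / support(X, transactions)\n",
    "\n",
    "T = [{'A', 'B'}, {'A', 'B', 'C'}, {'A', 'C'}, {'B', 'C'}]\n",
    "print(support({'A', 'B'}, T))        # 2 of 4 transactions contain both -> 0.5\n",
    "print(confidence({'A'}, {'B'}, T))   # support(AB)/support(A) = 0.5/0.75"
   ]
  },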
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Apriori Algorithm\n",
    "    (1) Iteratively find all frequent itemsets in the transaction database, i.e. the itemsets whose support is not below the user-defined threshold.\n",
    "    (2) From the frequent itemsets, construct the rules that satisfy the user's minimum confidence.\n",
    "     prune step --> join step --> prune step --> join step ... executed k times"
   ]
  },
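  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Step (1) above, with its alternating join and prune steps, can be sketched as a compact (inefficient but faithful) Apriori loop. This is an illustrative sketch: `min_sup` here is an absolute count, and the function name is ours."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of Apriori step (1): find all itemsets with support >= min_sup\n",
    "from itertools import combinations\n",
    "\n",
    "def apriori(transactions, min_sup):\n",
    "    items = sorted({i for t in transactions for i in t})\n",
    "    frequent, k = {}, 1\n",
    "    current = [frozenset([i]) for i in items]\n",
    "    while current:\n",
    "        # count the support of each candidate itemset\n",
    "        counts = {c: sum(c <= t for t in transactions) for c in current}\n",
    "        survivors = {c: n for c, n in counts.items() if n >= min_sup}\n",
    "        frequent.update(survivors)\n",
    "        # join step: build (k+1)-candidates from surviving k-itemsets;\n",
    "        # prune step: keep only candidates whose k-subsets all survived\n",
    "        k += 1\n",
    "        current = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}\n",
    "        current = [c for c in current\n",
    "                   if all(frozenset(s) in survivors for s in combinations(c, k - 1))]\n",
    "    return frequent\n",
    "\n",
    "T = [{'A', 'B', 'C'}, {'A', 'B'}, {'A', 'C'}, {'B', 'C'}]\n",
    "print(apriori(T, min_sup=2))"
   ]
  },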
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assignment 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['ABCDEF', 'BACDE', 'CABE', 'DABE', 'EABCD', 'FA']"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X = ['ABCDEF', 'BACDE', 'CABE', 'DABE', 'EABCD', 'FA']\n",
    "X"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def Map_s(x):\n",
    "    # For each record, pair its first character with every other character\n",
    "    # that belongs to the vocabulary, and print the (head, item) pairs.\n",
    "    w = 'ABCDEF'     # vocabulary as a string, so `f1 in w` tests single characters\n",
    "    for word in x:\n",
    "        for f1 in word:\n",
    "            if f1 not in w:\n",
    "                continue\n",
    "            if f1 != word[0]:\n",
    "                print(word[0], ':', f1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
