<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="robots" content="noindex, nofollow">
    <link rel="icon" href="../images/logo/logo.png" type="image/x-icon">
    <link rel="shortcut icon" href="../images/logo/logo.png"
          type="image/x-icon">
    <title>浏阳德塔软件开发有限公司 女娲计划</title>
</head>
<body style="max-width: 700px; text-align:center; margin:auto;">
<div style="text-align:left; max-width: 680px; margin-left:15px;">
    <a href="../">Previous Page</a>
    <br/>
    <br/>
    <br/>Chapter 1: The Deta Natural Language Turing System
    <br/> Author: Yaoguang Luo (罗瑶光)<br/>
    <br/> Basic application: the language analyzer of the compiler for meta-primitive catalysis and peptide computing (元基催化与肽计算)
    <br/>
    On the description of distance: <br/>
    in Mr. Yaoguang Luo's personal view, once the positional distances between
    words of different attributes and different classes are used to locate the
    center of gravity of the main descriptive sentences, the central idea of
    an article can be summarized more effectively. Examples follow.
    <br/>
    Suppose the words kitchen knife, chopping board, wok, streaky pork, and
    spices all appear in a text. If streaky pork occurs with high frequency,
    a reader, and likewise a computer, can tell that the article is a
    technical piece about how a hotel chef cooks edible meat. If instead the
    spice words occur with high frequency, the reader and the computer can
    tell that the article is a technical piece introducing how spices are
    used during the chef's cooking. <br/>
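    The frequency test described above can be sketched as a few lines of code.
    The token names and counts below are invented for illustration; this is a
    naive proxy, not the system's actual classifier.

```python
from collections import Counter

def dominant_topic_term(tokens):
    """Return the most frequent token and its count: a naive proxy for
    the topic a reader (or computer) infers from raw word frequency."""
    term, freq = Counter(tokens).most_common(1)[0]
    return term, freq

# Hypothetical token stream extracted from a cooking article.
tokens = (["streaky-pork"] * 12 + ["kitchen-knife"] * 2 +
          ["chopping-board"] * 2 + ["wok"] * 3 + ["spice"] * 4)
term, freq = dominant_topic_term(tokens)  # streaky-pork dominates
```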
    <br/>
    Continuing the example: suppose the same spice term, say "brand aged
    vinegar", appears in a 1000-character article of five paragraphs, in
    paragraphs 1, 2, 4, and 5, more than 30 times in total, 20 of them in
    paragraph 4. Word distance can then raise the centroid value of "brand
    aged vinegar", showing that this is a technical article introducing how
    the spice is used in the chef's cooking, and that the concrete usage is
    described in paragraph 4. <br/>
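    One way to locate where a term's "center of gravity" sits, as in the
    paragraph example above, is to take the paragraph with the most
    occurrences together with the count-weighted mean paragraph index. The
    per-paragraph counts below are assumed for illustration.

```python
def paragraph_profile(counts_by_paragraph):
    """Return (peak paragraph, count-weighted mean paragraph index)
    for one term, locating where its occurrence mass sits."""
    peak = max(counts_by_paragraph, key=counts_by_paragraph.get)
    total = sum(counts_by_paragraph.values())
    centroid = sum(p * c for p, c in counts_by_paragraph.items()) / total
    return peak, centroid

# Hypothetical counts for 'brand aged vinegar' per paragraph (1-indexed):
# ~30 occurrences in paragraphs 1, 2, 4 and 5, 20 of them in paragraph 4.
counts = {1: 3, 2: 3, 4: 20, 5: 4}
peak, centroid = paragraph_profile(counts)  # mass concentrates near paragraph 4
```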
    <br/>
    The value of Euclidean entropy is that it can better trace the trajectory
    of these word-distance relations of "brand aged vinegar" and perform edge
    inclusion. For example, if the text contains the pattern "brand aged
    vinegar + dumpling + brand aged vinegar + streaky pork", then the weight
    of "dumpling" (although its RNN weight is low) is raised in the DNN
    centroid computation by the distance-trajectory entropy, and "streaky
    pork" gains weight because it appears at the end (positions nearer the
    end weigh more. The method I designed here ran into a problem: when
    writing ELS essays I usually put the conclusion at the end, and I
    personally believe the final paragraph is for summarizing, but that does
    not represent how all humans think. Today, 2020-04-02, I reconsidered
    this and still find it reasonably valid: in some writing styles the
    central thesis is stated in an opening outlook, argued point by point,
    and then summed up in a concluding paragraph. Although the value words in
    the outlook accumulate a low RNN score, their word distances become
    correspondingly large, so the final mean still carries a large weight and
    does not easily drift from the expected result.
    ) <br/>
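    The parenthetical argument above, that a thesis word introduced in an
    early outlook and revisited in the conclusion keeps a high overall score
    because its word distances are large, can be sketched as a toy scoring
    rule. The function name and the 50/50 blend of frequency and span are
    illustrative assumptions, not the system's actual formula.

```python
def span_score(positions, doc_len):
    """Blend raw frequency with the normalized positional span
    (distance between first and last occurrence). A term spanning
    outlook and conclusion keeps a high mean score even if its
    running, RNN-style accumulation starts low."""
    freq = len(positions)
    span = (max(positions) - min(positions)) / doc_len
    return (freq + freq * span) / 2  # illustrative 50/50 blend

doc_len = 1000  # character positions, as in the 1000-character example
thesis = span_score([10, 990], doc_len)   # stated early, summed up at the end
filler = span_score([400, 410], doc_len)  # two adjacent mid-text mentions
```

    Two terms of equal frequency thus separate cleanly: the wide-span thesis
    term outranks the locally clustered filler term.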
    Described by: Yaoguang Luo <br/>
    An Implementation of POS Distance. <br/>
    Mr. Yaoguang Luo considered the distance between POS-tagged lexicons as a
    measure of lexical weight. Factors such as the attributes of different
    words and the positions of different word classes can be used to compute
    the central idea of a text. Continuing with the examples below. <br/>
    <br/>
    Assume a paper contains five words: Kitchen Knife, Chopping Board, Wok,
    Streaky-Meat and Spicy-Condiment, and that Streaky-Meat shows the highest
    lexical frequency. A humanoid computer reading the paper could mine this
    information and easily infer that the paper is an essay on cooking
    science and technology, mainly a presentation of meat. Similarly, if
    Spicy-Condiment shows the highest frequency, the paper mainly presents a
    spice formula. Let us continue with the examples below. <br/>
    <br/>
    Assume the Spicy-Condiment is a mature vinegar and a high-frequency
    lexicon: 'Vinegar' appears in paragraphs 1, 2, 4 and 5, and especially in
    paragraph 4. Measuring the distances between occurrences of the same
    lexicon scales the weight of 'Vinegar'. The humanoid computer can then
    easily infer that the paper is an essay on cooking science and
    technology, mainly an introduction to 'Vinegar' within a spice formula,
    with the details in paragraph 4. <br/>
    <br/>
    Euclidean KNN can trace the distances between occurrences of frequent
    lexicons. For example, in Deta RNN computing, assume the sequential input
    1 'Vinegar', 2 'Dumpling', 3 'Vinegar' and 4 'Streaky-Meat'. The Deta DNN
    rank of 'Dumpling' will then be higher than its Deta RNN rank, and the
    Deta DNN ratio of 'Streaky-Meat' is also raised. <br/>
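    The re-ranking just described can be sketched as follows. The "flanked by
    the anchor term" rule, the half-weight inheritance, and the end-position
    bonus factor are assumptions made for illustration; the actual Deta
    RNN/DNN ranking formulas are not specified here.

```python
def rescore(sequence, freq):
    """Distance-trajectory rescoring sketch: a low-frequency token that
    sits between two occurrences of the highest-frequency anchor term
    inherits part of the anchor's weight, and the final token receives
    a mild end-of-text (conclusion) bonus."""
    anchor = max(freq, key=freq.get)  # most frequent term, e.g. 'Vinegar'
    scores = {}
    last = len(sequence) - 1
    for i, tok in enumerate(sequence):
        s = float(freq.get(tok, 0))
        if 0 < i < last and sequence[i - 1] == anchor and sequence[i + 1] == anchor:
            s += freq[anchor] / 2  # inherit half the anchor's weight
        if i == last:
            s *= 1.2               # end-position bonus (assumed factor)
        scores[tok] = max(scores.get(tok, 0.0), s)
    return scores

seq = ["Vinegar", "Dumpling", "Vinegar", "Streaky-Meat"]
freq = {"Vinegar": 30, "Dumpling": 1, "Streaky-Meat": 5}
scores = rescore(seq, freq)  # 'Dumpling' is boosted well above its raw count
```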
    Author: Yaoguang Luo <br/>
    <br/>

</div>
</body>
</html>