package com.vibes.actors

import akka.actor.{Actor, ActorRef, Props}
import akka.util.Timeout
import com.typesafe.scalalogging.LazyLogging
import com.vibes.actions._
import com.vibes.utils.{VConf, VExecution}
import org.joda.time._

import scala.collection.immutable.SortedSet
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Promise
import scala.concurrent.duration._
import scala.util.Random

/**
  * This class represents the coordinator that controls the nodes and the execution order in the network.
  *
  * For example, to exploit parallelism, the MasterActor can give permission to multiple nodes to fast forward
  * their propagation executables for transactions (PropagateTransaction), because the order in which the
  * transactions are propagated does not matter as long as there is no other executable between them. Meaning
  * that sequential executables of type PropagateTransaction are simply executed in parallel. Which means the
  * recipients of a transaction might not be ordered by timestamp, but a node will always receive all
  * transactions due for a block / mempool within the correct time.
  *
  * TL;DR: We achieve parallelism by allowing transactions within a single block to be arbitrarily reordered.
  * Multiple actors are able to propagate multiple transactions between each other without asking for permission
  * from the MasterActor each time, because it has already issued multiple permissions to them.
  *
  * Note that in the original Bitcoin implementation a similar type of anarchy exists. If there is no solution
  * to the current block, a node may change the order of transactions as it suits it when starting to look for a
  * new nonce / solution.
  *
  * The implication is that whenever a node needs to mine a block, the MasterActor ensures all permitted
  * transactions are completed, empties all workRequests and queries all nodes for a new workRequest, so it can
  * bring the system back to a synchronized state.
  *
  * An alternative, cleaner solution that I tried in the beginning was the following: reset all the workRequests
  * after an execution has taken place and ask all nodes for their workRequests once again. However, the
  * MasterActor then becomes the bottleneck, because only one actor at a time can execute an operation while
  * blocking all others. The current solution blocks only in particular cases and is therefore much faster,
  * but messier.
  */
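// The batching idea described above (a leading run of PropagateTransaction executables may be executed in
// parallel, because reordering them is harmless) can be sketched as a pure function. This is an illustrative
// sketch only: `ExecType` and `batchHead` are assumed names, not the real `VExecution.WorkRequest` API.

```scala
object BatchSketch {
  sealed trait ExecType
  case object PropagateTransaction extends ExecType
  case object MineBlock extends ExecType

  // Split a timestamp-ordered queue into the leading run of PropagateTransaction
  // executables (safe to fast forward in parallel) and the remainder, which must
  // wait for the next synchronization round.
  def batchHead(queue: List[ExecType]): (List[ExecType], List[ExecType]) =
    queue.span(_ == PropagateTransaction)

  def main(args: Array[String]): Unit = {
    val queue = List(PropagateTransaction, PropagateTransaction, MineBlock, PropagateTransaction)
    val (parallel, rest) = batchHead(queue)
    println(parallel.size) // 2: only the leading run may be reordered freely
    println(rest.size)     // 2: MineBlock blocks the trailing PropagateTransaction
  }
}
```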
class MasterActor extends Actor with LazyLogging {
  implicit val timeout: Timeout = Timeout(20.seconds)

  /**
    * NodeActors ask for work permission and the MasterActor issues it, so that they can fast forward.
    * Only issue a permission once all currentNodeActors have voted.
    */
  private var workRequests: SortedSet[VExecution.WorkRequest] = SortedSet.empty

  /**
    * This is a simple, but not robust, solution to the following problem: the MasterActor must have a reference
    * to all currentNodeActors to distribute work, and must also make sure they have cast their work requests.
    *
    * Note: A better solution would be for the nodes to register / deregister themselves via messages, but I did
    * not bother doing that in a prototype. So instead, once an Actor votes it is added to the set of
    * currentNodeActors, until VConf.numberOfNodes has been reached.
    */
  private var currentNodeActors: Set[ActorRef]         = Set.empty
  private var numberOfWorkRequests: Map[ActorRef, Int] = Map.empty // ActorRef -> number of workRequests still expected from that actor

  // detailed logging, to understand what happens in very large simulations
  private var detailedLoggingEnabled: Boolean = false

  // ensures neighbour tables are refreshed at intervals of roughly VConf.neighboursDiscoveryInterval
  private var lastNeighbourDiscoveryDate = DateTime.now

  /**
    * Returns the final result of the computation to be delivered to the client. Note that the ask (?) pattern
    * should be used here instead of a Promise, because this solution would break for different JVMs
    * communicating with each other, since it currently relies on a reference to the promise. Instead, the
    * sender should be passed around, combined with the ask pattern.
    */
  private var currentPromise = Promise[ReducerIntermediateResult]()

  /**
    * Injected Actors that are children of the MasterActor. Again, they should register themselves via messaging
    * instead, and proper error handling should be implemented if this is intended for use in a distributed
    * setting.
    */
  val discoveryActor: ActorRef =
    context.actorOf(DiscoveryActor.props(VConf.numberOfNeighbours), "Discovery")
  // Props is a configuration class used to specify options when creating an actor. It is immutable, and
  // therefore thread-safe and shareable.
  val reducerActor: ActorRef =
    context.actorOf(ReducerActor.props(self), "Reducer")
  val nodeRepoActor: ActorRef =
    context.actorOf(NodeRepoActor.props(discoveryActor, reducerActor), "NodeRepo")

  /**
    * Delegate work to NodeRepo to register NodeActors
    */
  //(1 to VConf.numberOfNodes).foreach(_ => nodeRepoActor ! NodeRepoActions.RegisterNode)
  val numberOfListener: Int = (VConf.numberOfNodes * VConf.rateOfListener / 100).toInt
  logger.debug(s"numberOfListener: $numberOfListener")
  (1 to numberOfListener).foreach(_ => nodeRepoActor ! NodeRepoActions.RegisterListener)
  (1 to VConf.numberOfNodes - numberOfListener).foreach(_ => nodeRepoActor ! NodeRepoActions.RegisterNode)

  override def preStart(): Unit = {
    logger.debug(s"MasterActor started ${self.path}")
  }

  override def receive: Receive = {
    case MasterActions.Start =>
      /**
        * Again, since strictly message-based communication would be much more involved, for the prototype I
        * just assume that every NodeActor is alive after X seconds, and let the DiscoveryActor announce the
        * neighbours and start.
        */
      // schedule once: wait 3 seconds, then send a single message
      context.system.scheduler.scheduleOnce(3000.millisecond) {
        logger.debug(s"Announce Neighbours ${self.path}")
        discoveryActor ! DiscoveryActions.AnnounceNeighbours // after 3 seconds, send AnnounceNeighbours to the discoveryActor
      }

      context.system.scheduler.scheduleOnce(7000.millisecond) {
        logger.debug(s"Announce Start ${self.path}")
        nodeRepoActor ! NodeRepoActions.AnnounceStart(DateTime.now)
      }

      sender() ! currentPromise

    case MasterActions.FinishEvents(events) =>
      logger.debug("FINISH EVENTS...")
      currentPromise.success(events)

    case MasterActions.CastWorkRequest(workRequest) =>
      /**
        * Some nodes receive the right to workRequest more than once (for instance, if we fast forward multiple
        * transactions that are being sent to the same node / actor, it will workRequest multiple times). We are
        * only interested in the last workRequest it submitted, because then we know that at the time of the
        * last submission its execution queue was complete.
        */
      // When a NodeActor casts a vote, first check how many more workRequests are still expected from it.
      numberOfWorkRequests.get(workRequest.fromActor) match {
        // discard all workRequests but the last one
        case Some(int) if int > 0 =>
          numberOfWorkRequests += (workRequest.fromActor -> (int - 1)) // one fewer workRequest still expected from this actor

        case _ =>
          // the last workRequest goes on
          currentNodeActors += workRequest.fromActor
          // numberOfWorkRequests has no pending entry for this actor, so workRequests must not already contain
          // a request from it
          assert(!workRequests.contains(workRequest), "no workRequest should be received more than once")
          workRequests += workRequest

          // have all workRequests been collected?
          if (workRequests.size == VConf.numberOfNodes) {
            // number of actors that requested work should equal the number of nodes (aka each requested work once)
            assert(
              currentNodeActors.size == VConf.numberOfNodes,
              "number of actors that requested work should be the same as the number of nodes (aka each requested work once)"
            )
            // assert each requested work once, checked in an alternative way
            assert(workRequests.map(_.fromActor).toSet.size == workRequests.size,
                   "each requested work once, checked in an alternative way")

            // Perform neighbour discovery and refresh the neighbour tables. Strictly speaking, node discovery
            // does not belong here, but running it here is convenient.
            val priorityWorkRequest = workRequests.head

            // 1. check whether it is time to refresh the nodes' neighbour information
            if (new org.joda.time.Duration(lastNeighbourDiscoveryDate, priorityWorkRequest.timestamp)
                  .isLongerThan(
                    new org.joda.time.Duration( // joda-time Duration helper class
                      lastNeighbourDiscoveryDate,
                      lastNeighbourDiscoveryDate.plusSeconds(VConf.neighboursDiscoveryInterval))
                  )) {
              lastNeighbourDiscoveryDate = priorityWorkRequest.timestamp
              discoveryActor ! DiscoveryActions.AnnounceNeighbours // refresh interval elapsed: tell the discoveryActor to update neighbours
            }

            // 2. if the simulation end time has passed, announce the end
            if (VConf.simulateUntil.isBefore(priorityWorkRequest.timestamp)) {

              //discoveryActor!DiscoveryActions.printBian
              logger.debug(s"Announce End ${self.path} ${priorityWorkRequest.timestamp}")

              nodeRepoActor ! NodeRepoActions.AnnounceEnd

            }
            // 3. check whether the next executable is a block-mining job
            else if (priorityWorkRequest.executionType == VExecution.ExecutionType.MineBlock) {
              // if mining of a block should be performed, first distribute the transaction throughput to the
              // nodes for the next mining interval
              //currentNodeActors = Random.shuffle(currentNodeActors) // shuffling disabled: transaction issuance is given to fixed nodes
              val actorsVector = currentNodeActors.toVector // indexed access for random picks

              // unused: whether to launch a transaction flood attack
//              if (VConf.floodAttackTransactionFee > 0) {
//                logger.debug(s"VConf.floodTransactionPool... ${VConf.floodAttackTransactionPool}")
//
//                (1 to VConf.floodAttackTransactionPool).foreach { _ =>
//                  val randomActorFrom = actorsVector(Random.nextInt(actorsVector.size))
//                  val randomActorTo   = actorsVector(Random.nextInt(actorsVector.size))
//                  val now             = priorityWorkRequest.timestamp
//                  randomActorFrom ! NodeActions.IssueTransactionFloodAttack(
//                    randomActorTo,
//                    now.plusMillis(50)
//                  )
//                }
//              }

              // Randomly distribute requests to the NodeActors: within blockTime the nodes will issue
              // throughPut transactions. Mining a block can take less or more than blockTime, so this only
              // ensures the average number of transactions per block and the exact number per blockTime.
              logger.debug(s"Transactions are requested ${priorityWorkRequest.timestamp}")

              (1 to VConf.throughPut).foreach { index =>
                //val randomActorFrom = actorsVector(Random.nextInt(actorsVector.size))
                /**
                  * Java equivalent:
                  *   Random r = new Random();
                  *   r.nextGaussian();                    // standard normal
                  *   Math.sqrt(b) * r.nextGaussian() + a; // mean a, variance b
                  */
                // Pick the sender index from a Gaussian centred on the middle of actorsVector, clamped into
                // the valid index range so it can never fall outside the vector.
                val ith = math.min(actorsVector.size - 1,
                                   math.max(0, (Random.nextGaussian() * VConf.mu + actorsVector.size / 2).toInt))
                val randomActorFrom = actorsVector(ith)
                val randomActorTo   = actorsVector(Random.nextInt(actorsVector.size)) // recipient chosen uniformly at random
                val now             = priorityWorkRequest.timestamp

                randomActorFrom ! NodeActions.IssueTransaction(
                  randomActorTo,
                  now.plusMillis(VConf.blockTime * 1000 / (index + 1))
                )
              }
              // var remainingBlockTime=VConf.blockTime
              // var numberOfTransactions=VConf.throughPutMin+Random.nextInt(1+VConf.throughPutMax-VConf.throughPutMin)
              // (1 to numberOfTransactions).foreach { index =>
              //   val randomActorFrom = actorsVector(Random.nextInt(actorsVector.size))
              //   val randomActorTo   = actorsVector(Random.nextInt(actorsVector.size))
              //   val now             = priorityWorkRequest.timestamp
              //   val nextAdd=Random.nextInt(remainingBlockTime*1000/(numberOfTransactions-index+1))
              //   val nextTime=now.plusMillis(nextAdd)
              //   remainingBlockTime = remainingBlockTime - nextAdd
              //   randomActorFrom ! NodeActions.IssueTransaction(
              //       randomActorTo,
              //       nextTime
              //   )
              // }

              // clear all workRequests
              workRequests = SortedSet.empty
              // let the first actor mine the block in this executable; it will also send
              // AnnounceNextWorkRequestAndMine to the other actors
              priorityWorkRequest.fromActor ! NodeActions.ProcessNextExecutable(priorityWorkRequest)

            } else if (priorityWorkRequest.executionType == VExecution.ExecutionType.PropagateTransaction) {
              // If the execution type is PropagateTransaction, collect workRequests for as long as
              // ExecutionType.PropagateTransaction is at the head of the pipeline, forward all of them and give
              // the nodes permission to execute them. Nodes executing them will most likely be receiving
              // transactions from other nodes as well, so we need to figure out how many times a node will be
              // voting, which is done via numberOfWorkRequests.

              // Imagine A1 transfers a transaction to A2, but A2 also needs to transfer a transaction and after
              // that propagate a block. Imagine now A2 transfers its transaction and workRequests with
              // propagate block. Later A2 receives the transaction from A1 and now has a propagate transaction
              // in its queue on top of the propagate block. Luckily, A2's first workRequest will have been
              // discarded because numberOfWorkRequests > 1, and now A2 will be able to correctly workRequest
              // with the most recent piece of executable on top (which is propagate transaction).
              if (detailedLoggingEnabled) {
                logger.debug(s"Transactions are propagated ${priorityWorkRequest.timestamp}")
              }
              // work requests of type PropagateTransaction collected for parallel fast-forwarding
              var propagateWorkRequests: List[VExecution.WorkRequest] = List.empty

              while (workRequests.nonEmpty && workRequests.head.executionType == VExecution.ExecutionType.PropagateTransaction) {
                propagateWorkRequests ::= workRequests.head // move the head of workRequests into propagateWorkRequests
                workRequests = workRequests.tail            // drop the head
              }

              // remove all actors that requested work and are going to receive something, because they should
              // request work again
              workRequests = workRequests.filter( // keep a request only if the predicate below holds
                workRequest =>
                  !propagateWorkRequests
                    .map(_.toActor) // all recipients of the propagated transactions
                    .contains(workRequest.fromActor))

              // figure out how many workRequests we'll receive from each actor, so that we can discard all of
              // them but the last one
              numberOfWorkRequests = Map.empty
              propagateWorkRequests.foreach { workRequest =>
                numberOfWorkRequests.get(workRequest.toActor) match {
                  case Some(int) =>
                    numberOfWorkRequests += (workRequest.toActor -> (int + 1))
                  case _ => numberOfWorkRequests += (workRequest.toActor -> 0)
                }

                numberOfWorkRequests.get(workRequest.fromActor) match {
                  case Some(int) =>
                    numberOfWorkRequests += (workRequest.fromActor -> (int + 1))
                  case _ => numberOfWorkRequests += (workRequest.fromActor -> 0)
                }
              }

              propagateWorkRequests.foreach(workRequest =>
                workRequest.fromActor ! NodeActions.ProcessNextExecutable(priorityWorkRequest))

            } else {
              // Anything else, such as propagating a block, is executed by a single actor. Compared to
              // transactions this happens rarely, so no extra performance gain is needed here: simply block the
              // whole system and let the two actors involved in the block propagation finish their work and
              // continue. The "from" actor propagates the block and then casts a new workRequest; the receiving
              // actor receives the block and casts a new workRequest in ReceiveBlock.
              if (detailedLoggingEnabled) {
                logger.debug(s"Blocks are propagated ${priorityWorkRequest.timestamp}")
              }
              workRequests = workRequests.tail
              workRequests = workRequests.filter(_.fromActor != priorityWorkRequest.toActor)
              priorityWorkRequest.fromActor ! NodeActions.ProcessNextExecutable(priorityWorkRequest)
            }
          } // end if (workRequests.size == VConf.numberOfNodes)
      }
  }
}

object MasterActor {
  def props(): Props = Props(new MasterActor())
}
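// The MineBlock branch above picks the sender of each transaction from a Gaussian centred on the middle of the
// actor vector. A minimal, self-contained sketch of that index-selection logic follows; `mu` mirrors VConf.mu,
// while `GaussianPick` and `pick` are illustrative names, not part of this codebase.

```scala
import scala.util.Random

object GaussianPick {
  // Index drawn from a normal distribution centred on size / 2 with spread mu,
  // clamped into [0, size - 1] so it can never fall outside the vector.
  def pick(size: Int, mu: Double, rng: Random): Int = {
    val raw = (rng.nextGaussian() * mu + size / 2).toInt
    math.min(size - 1, math.max(0, raw))
  }

  def main(args: Array[String]): Unit = {
    val rng   = new Random(42)
    val picks = Vector.fill(10000)(pick(100, 25.0, rng))
    assert(picks.forall(i => i >= 0 && i < 100)) // always a valid index
    println(s"min=${picks.min} max=${picks.max}")
  }
}
```

// Clamping on both ends matters: without the upper bound, a large positive Gaussian draw would index past the
// end of the vector and throw an IndexOutOfBoundsException.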
