/*
Recurrent neural network (RNN) layer implementation.
Supports sequential data processing and modeling of temporal dependencies.
Provides forward propagation, backward propagation, and weight updates.
*/
#include "../include/RecurrentLayer.h"
#include "../../common/include/ActivationFunction.h"
#include "../../common/include/LossFunction.h"
#include <iostream>
#include <stdexcept>
#include <cmath>
#include <algorithm>  // std::min (used in print())
#include <random>

using namespace std;

// Helper: fill a 2-D tensor with uniform random values in [-scale, scale].
// static to keep these translation-unit-local and avoid ODR clashes.
static void randomInit(Tensor<double, 2>& tensor, double scale = 0.01) {
    random_device rd;
    mt19937 gen(rd());
    uniform_real_distribution<double> dist(-scale, scale);
    
    for (int i = 0; i < tensor.getSize(); i++) {
        tensor(i) = dist(gen);
    }
}

// Overload: fill a 1-D tensor with uniform random values in [-scale, scale]
static void randomInit(Tensor<double, 1>& tensor, double scale = 0.01) {
    random_device rd;
    mt19937 gen(rd());
    uniform_real_distribution<double> dist(-scale, scale);
    
    for (int i = 0; i < tensor.getSize(); i++) {
        tensor(i) = dist(gen);
    }
}

RecurrentLayer::RecurrentLayer(int layerIndex, int inputSize, int hiddenSize, int outputSize, 
                               const string& activationName, const string& lossName) : 
    Layer(layerIndex, LayerType::RECURRENT, activationName, lossName), 
    inputSize(inputSize), hiddenSize(hiddenSize), outputSize(outputSize), learningRate(0.01) {
    
    if (inputSize <= 0 || hiddenSize <= 0 || outputSize <= 0) {
        throw invalid_argument("All dimensions must be greater than 0");
    }
    
    // Initialize weight matrices
    Wxh = Tensor<double, 2>(hiddenSize, inputSize);
    Whh = Tensor<double, 2>(hiddenSize, hiddenSize);
    Why = Tensor<double, 2>(outputSize, hiddenSize);
    
    // Initialize bias vectors
    bh = Tensor<double, 1>(hiddenSize);
    by = Tensor<double, 1>(outputSize);
    
    // Randomly initialize weights and biases
    randomInit(Wxh);
    randomInit(Whh, 0.1);  // note: the recurrent weights use a larger scale than the 0.01 default
    randomInit(Why);
    randomInit(bh);
    randomInit(by);
    
    // Initialize hidden states
    hiddenState = Tensor<double, 1>(hiddenSize);
    lastHiddenState = Tensor<double, 1>(hiddenSize);
    
    // Initialize forward-pass caches
    inputCache = Tensor<double, 1>(inputSize);
    outputCache = Tensor<double, 1>(outputSize);
    
    // Initialize gradient buffers
    dWxh = Tensor<double, 2>(hiddenSize, inputSize);
    dWhh = Tensor<double, 2>(hiddenSize, hiddenSize);
    dWhy = Tensor<double, 2>(outputSize, hiddenSize);
    dbh = Tensor<double, 1>(hiddenSize);
    dby = Tensor<double, 1>(outputSize);
    
    // Create the activation and loss functions
    this->activation = ActivationFactory<double, 1>::create(activationName);
    this->loss = LossFactory::create(lossName);
}

RecurrentLayer::~RecurrentLayer() {
}

void RecurrentLayer::resetState() {
    // Reset both hidden states to zero
    for (int i = 0; i < hiddenSize; i++) {
        hiddenState(i) = 0.0;
        lastHiddenState(i) = 0.0;
    }
}

void RecurrentLayer::setHiddenState(const Tensor<double, 1>& state) {
    if (state.getSize() != hiddenSize) {
        throw invalid_argument("State size must match hidden size");
    }
    hiddenState = state;
    lastHiddenState = state;
}

const Tensor<double, 1>& RecurrentLayer::getHiddenState() const {
    return hiddenState;
}

Tensor<double, 1> RecurrentLayer::forward(const Tensor<double, 1>& input) {
    if (input.getSize() != inputSize) {
        throw invalid_argument("Input size must match layer input size");
    }
    
    // Cache the input for backpropagation
    inputCache = input;
    
    // Compute the new hidden state: h_t = tanh(Wxh * x_t + Whh * h_{t-1} + bh)
    Tensor<double, 1> newHiddenState(hiddenSize);
    
    // Wxh * x_t
    for (int i = 0; i < hiddenSize; i++) {
        double sum = 0.0;
        for (int j = 0; j < inputSize; j++) {
            sum += Wxh(i, j) * input(j);
        }
        newHiddenState(i) = sum;
    }
    
    // + Whh * h_{t-1} (the hidden state from the previous timestep)
    for (int i = 0; i < hiddenSize; i++) {
        double sum = 0.0;
        for (int j = 0; j < hiddenSize; j++) {
            sum += Whh(i, j) * hiddenState(j);  // bug fix: use h_{t-1} (hiddenState), not h_{t-2} (lastHiddenState)
        }
        newHiddenState(i) += sum;
    }
    
    // + bh
    for (int i = 0; i < hiddenSize; i++) {
        newHiddenState(i) += bh(i);
    }
    
    // Apply the activation function
    for (int i = 0; i < hiddenSize; i++) {
        newHiddenState(i) = activation->activate(newHiddenState(i));
    }
    
    // Shift the states: keep h_{t-1} for backpropagation, then store h_t
    lastHiddenState = hiddenState;
    hiddenState = newHiddenState;
    
    // Compute the output: y_t = Why * h_t + by
    Tensor<double, 1> output(outputSize);
    for (int i = 0; i < outputSize; i++) {
        double sum = 0.0;
        for (int j = 0; j < hiddenSize; j++) {
            sum += Why(i, j) * hiddenState(j);
        }
        output(i) = sum + by(i);
    }
    
    // Cache the output for backpropagation
    outputCache = output;
    
    return output;
}

// Single-step backward pass: gradients are computed for the current timestep only
// (truncated BPTT with a window of 1; the error is not propagated to earlier steps).
void RecurrentLayer::backward(const Tensor<double, 1>& target, double learningRate) {
    if (target.getSize() != outputSize) {
        throw invalid_argument("Target size must match output size");
    }
    
    // Output-layer error; assumes a squared-error style loss so that dL/dy = y - target
    Tensor<double, 1> outputError(outputSize);
    for (int i = 0; i < outputSize; i++) {
        outputError(i) = outputCache(i) - target(i);
    }
    
    // Gradient of Why
    for (int i = 0; i < outputSize; i++) {
        for (int j = 0; j < hiddenSize; j++) {
            dWhy(i, j) = outputError(i) * hiddenState(j);
        }
    }
    
    // Gradient of by
    for (int i = 0; i < outputSize; i++) {
        dby(i) = outputError(i);
    }
    
    // Backpropagate the output error into the hidden layer
    Tensor<double, 1> hiddenError(hiddenSize);
    for (int i = 0; i < hiddenSize; i++) {
        double sum = 0.0;
        for (int j = 0; j < outputSize; j++) {
            sum += Why(j, i) * outputError(j);
        }
        hiddenError(i) = sum;
    }
    
    // Activation derivative, assuming tanh: d/dx tanh(x) = 1 - tanh^2(x) = 1 - h^2
    // (hard-coded; this does not consult the configured activation function)
    Tensor<double, 1> tanhDerivative(hiddenSize);
    for (int i = 0; i < hiddenSize; i++) {
        double h = hiddenState(i);
        tanhDerivative(i) = 1.0 - h * h;
    }
    
    // Delta of the hidden layer
    Tensor<double, 1> hiddenDelta(hiddenSize);
    for (int i = 0; i < hiddenSize; i++) {
        hiddenDelta(i) = hiddenError(i) * tanhDerivative(i);
    }
    
    // Gradient of Wxh
    for (int i = 0; i < hiddenSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            dWxh(i, j) = hiddenDelta(i) * inputCache(j);
        }
    }
    
    // Gradient of Whh
    for (int i = 0; i < hiddenSize; i++) {
        for (int j = 0; j < hiddenSize; j++) {
            dWhh(i, j) = hiddenDelta(i) * lastHiddenState(j);
        }
    }
    
    // Gradient of bh
    for (int i = 0; i < hiddenSize; i++) {
        dbh(i) = hiddenDelta(i);
    }
    
    // Update weights and biases
    this->learningRate = learningRate;
    updateWeights(learningRate, 0.0);
    updateBiases(learningRate, 0.0);
}

void RecurrentLayer::updateWeights(double learningRate, double momentum) {
    // Plain SGD step; the momentum parameter is currently unused.
    // Update Wxh
    for (int i = 0; i < hiddenSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            Wxh(i, j) -= learningRate * dWxh(i, j);
        }
    }
    
    // Update Whh
    for (int i = 0; i < hiddenSize; i++) {
        for (int j = 0; j < hiddenSize; j++) {
            Whh(i, j) -= learningRate * dWhh(i, j);
        }
    }
    
    // Update Why
    for (int i = 0; i < outputSize; i++) {
        for (int j = 0; j < hiddenSize; j++) {
            Why(i, j) -= learningRate * dWhy(i, j);
        }
    }
}

void RecurrentLayer::updateBiases(double learningRate, double momentum) {
    // Plain SGD step; the momentum parameter is currently unused.
    // Update bh
    for (int i = 0; i < hiddenSize; i++) {
        bh(i) -= learningRate * dbh(i);
    }
    
    // Update by
    for (int i = 0; i < outputSize; i++) {
        by(i) -= learningRate * dby(i);
    }
}

const Tensor<double, 2>& RecurrentLayer::getInputWeights() const {
    return Wxh;
}

const Tensor<double, 2>& RecurrentLayer::getHiddenWeights() const {
    return Whh;
}

const Tensor<double, 2>& RecurrentLayer::getOutputWeights() const {
    return Why;
}

const Tensor<double, 1>& RecurrentLayer::getHiddenBiases() const {
    return bh;
}

const Tensor<double, 1>& RecurrentLayer::getOutputBiases() const {
    return by;
}

void RecurrentLayer::print() const {
    cout << "RecurrentLayer [" << inputSize << " -> " << hiddenSize << " -> " << outputSize << "]" << endl;
    cout << "Learning Rate: " << learningRate << endl;
    cout << "Hidden State: ";
    for (int i = 0; i < min(5, hiddenSize); i++) {
        cout << hiddenState(i) << " ";
    }
    if (hiddenSize > 5) cout << "...";
    cout << endl;
}

void RecurrentLayer::setLearningRate(double learningRate) {
    this->learningRate = learningRate;
}
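
/*
Usage sketch (hypothetical; assumes the Tensor indexing used above and an
activation/loss pair the factories accept, e.g. "tanh" / "mse", plus caller-owned
containers `inputs` and `targets` of Tensor<double, 1>):

    RecurrentLayer layer(0, 3, 8, 2, "tanh", "mse");   // 3 -> 8 -> 2
    layer.setLearningRate(0.01);
    layer.resetState();                                // start each sequence from h_0 = 0
    for (size_t t = 0; t < inputs.size(); t++) {
        Tensor<double, 1> y = layer.forward(inputs[t]); // inputs[t] has size 3
        layer.backward(targets[t], 0.01);               // targets[t] has size 2
    }
*/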