<template>
    <div class="FCN-container">
        <h1>Fully Connected Neural Networks</h1>
        <el-row :gutter="20">
            <el-col :span="10">
                <el-card class="box-card" header="Key Points for Neural Networks">
                    <el-text>
                        Let's review the workflow of the multivariate linear regression model:
                        <br><br>
                        1. Define the custom classes: LinearDataset and LinearRegression.<br>
                        2. Create the instances of the dataset, data loader (one each for training and testing), model, loss function, and optimizer classes, 7 objects in total:<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;2.1 Dataset / data loader: LinearDataset, paddle.io.DataLoader.<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;2.2 Training model: LinearRegression, given by:<br>
                        <MarkdownRenderer :content="yourMarkdownText[0]" />
                        &nbsp;&nbsp;&nbsp;&nbsp;2.3 Loss function: paddle.nn.MSELoss, given by:<br>
                        <MarkdownRenderer :content="yourMarkdownText[1]" />
                        &nbsp;&nbsp;&nbsp;&nbsp;2.4 Optimizer: paddle.optimizer.SGD, with the update rule:<br>
                        <MarkdownRenderer :content="yourMarkdownText[2]" />
                        &nbsp;&nbsp;&nbsp;&nbsp;where η is the learning rate.<br><br>
                        3. Train the model:<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.1 Switch to training mode: model.train().<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.2 Iterate over the dataset by batch_id, obtaining the input features and labels.<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.3 Forward pass: pred = model(features).<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.4 Compute the loss: loss = loss_fn(pred, labels).<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.5 Backpropagation: loss.backward().<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.6 Update the model parameters: optimizer.step().<br>
                        &nbsp;&nbsp;&nbsp;&nbsp;3.7 Clear the gradients: optimizer.clear_grad().<br>
                        4. Test the model.<br>
                    </el-text>
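The update rule above can be checked by hand with a single SGD step in plain Python (a minimal sketch that does not depend on Paddle; the sample values are made up for illustration):

```python
# One hand-derived SGD step for y = w1*x1 + w2*x2 + w3*x3 + b,
# with L = 0.5 * (y - yp)**2, so dL/dwi = (y - yp) * xi and dL/db = y - yp.

def sgd_step(w, b, x, yp, eta):
    """Apply one SGD update to weights w and bias b for a single sample."""
    y = sum(wi * xi for wi, xi in zip(w, x)) + b       # forward pass
    err = y - yp                                       # dL/dy
    w = [wi - eta * err * xi for wi, xi in zip(w, x)]  # w_i <- w_i - eta * dL/dw_i
    b = b - eta * err                                  # b  <- b  - eta * dL/db
    return w, b

w, b = [0.0, 0.0, 0.0], 0.0
w, b = sgd_step(w, b, x=[1.0, 2.0, 3.0], yp=10.0, eta=0.01)
print(w, b)  # the error y - yp = -10 pushes each weight up in proportion to its input
```

Note that the update to each weight is proportional to the corresponding input feature, which is exactly what the partial derivative in the formula says.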
                    <el-divider>Key Points for Neural Networks</el-divider>
                    <el-text>
                        During training and testing, a fully connected network is handled almost exactly like multivariate linear regression (provided both are regression models),
                        because the shapes of the input features and the labels are essentially the same in both cases.<br>
                        The differences lie in how the custom classes are defined and how their instances are created, above all in how the model is built and which optimizer is chosen.<br>
                        Every layer defined in FullyConnectedNet's __init__ must be called, in order, in forward().
                    </el-text>
                    <pre class="code-block"><code>class FullyConnectedNet(paddle.nn.Layer):
    def __init__(self, input_dim, hidden_dim1, hidden_dim2, output_dim=1):
        super().__init__()
        self.fc1 = paddle.nn.Linear(input_dim, hidden_dim1)
        self.relu1 = paddle.nn.ReLU()
        self.fc2 = paddle.nn.Linear(hidden_dim1, hidden_dim2)
        self.relu2 = paddle.nn.ReLU()
        self.fc3 = paddle.nn.Linear(hidden_dim2, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        x = self.fc3(x)
        return x</code></pre>
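To make the forward pass concrete, here is a NumPy sketch of the same fc1 → ReLU → fc2 → ReLU → fc3 computation. The weight shapes assume input_dim=4, hidden_dim1=128, hidden_dim2=64, and the random weights are placeholders, not trained parameters:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fcn_forward(x, W1, b1, W2, b2, W3, b3):
    """Mirror of FullyConnectedNet.forward: fc1 -> ReLU -> fc2 -> ReLU -> fc3."""
    h1 = relu(x @ W1 + b1)   # fc1 + relu1
    h2 = relu(h1 @ W2 + b2)  # fc2 + relu2
    return h2 @ W3 + b3      # fc3, no activation: raw regression output

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 64)), np.zeros(64)
W3, b3 = rng.standard_normal((64, 1)), np.zeros(1)

batch = rng.standard_normal((5, 4))  # a batch of 5 samples, 4 features each
out = fcn_forward(batch, W1, b1, W2, b2, W3, b3)
print(out.shape)  # (5, 1): one prediction per sample
```

Each Linear layer is just a matrix multiplication plus a bias; ReLU between them is what makes the network more expressive than a single linear model.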
                    <el-text>
                        You need to specify the number of neurons in each hidden layer.
                        In the example above, input_dim is 4, hidden_dim1 is 128, and hidden_dim2 is 64,
                        so the model has 4 input neurons, 128 + 64 = 192 hidden neurons, and 1 output neuron.
                    </el-text>
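Besides counting neurons, you can count trainable parameters: each Linear(n_in, n_out) layer holds n_in × n_out weights plus n_out biases. A quick check for the dimensions above:

```python
def linear_params(n_in, n_out):
    """Parameter count of a fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

dims = [4, 128, 64, 1]  # input -> hidden1 -> hidden2 -> output
total = sum(linear_params(a, b) for a, b in zip(dims, dims[1:]))
print(total)  # 640 + 8256 + 65 = 8961 trainable parameters
```
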
                    <br><br>
                    <el-text>
                        For the optimizer, we use Adam:
                    </el-text>
                    <pre class="code-block"><code>optimizer_fcnn = paddle.optimizer.Adam(learning_rate=LEARNING_RATE, parameters=model_fcnn.parameters())</code></pre>
                    <el-text>
                        Adam has three more hyperparameters than SGD: β_1, β_2, and ε. Its effective per-parameter learning rate is approximately:
                        <MarkdownRenderer :content="yourMarkdownText[3]" />
                        Adam is more automated than SGD, which makes it well suited to quick experiments and non-convex problems, but early stopping may need to be applied carefully to prevent overfitting.
                    </el-text>
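A plain-Python sketch of one Adam update for a single scalar parameter (using Paddle's default hyperparameters β_1=0.9, β_2=0.999, ε=1e-8) shows why the first step is close to η regardless of the gradient's magnitude:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta at time step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# On the very first step, m_hat = grad and v_hat = grad**2, so the step size
# is about lr * grad / |grad|, i.e. roughly lr, for any gradient magnitude.
for g in (0.001, 1.0, 1000.0):
    theta, _, _ = adam_step(0.0, g, m=0.0, v=0.0, t=1)
    print(theta)
```

This per-parameter rescaling is what makes Adam less sensitive to the raw gradient scale than SGD.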
                </el-card>
            </el-col>
            <el-col :span="14">
                <el-card class="box-card" header="Fully Connected Network Code">
                    <pre class="code-block"><code>import paddle
from paddle.io import Dataset, DataLoader
import pandas as pd
import numpy as np

N_FEATURES_FCNN = 4
N_SAMPLES_TRAIN = 1000
N_SAMPLES_TEST = 200

class FCNNDataset(Dataset):
    def __init__(self, csv_file, n_features):
        self.data = pd.read_csv(csv_file)
        self.features = self.data.iloc[:, :n_features].values.astype('float32')
        self.labels = self.data.iloc[:, n_features].values.astype('float32')

    def __getitem__(self, idx):
        return paddle.to_tensor(self.features[idx]), paddle.to_tensor(self.labels[idx]).unsqueeze(-1)

    def __len__(self):
        return len(self.data)

class FullyConnectedNet(paddle.nn.Layer):
    def __init__(self, input_dim, hidden_dim1, hidden_dim2, output_dim=1):
        super().__init__()
        self.fc1 = paddle.nn.Linear(input_dim, hidden_dim1)
        self.relu1 = paddle.nn.ReLU()
        self.fc2 = paddle.nn.Linear(hidden_dim1, hidden_dim2)
        self.relu2 = paddle.nn.ReLU()
        self.fc3 = paddle.nn.Linear(hidden_dim2, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        x = self.fc3(x)
        return x

train_dataset_fcnn = FCNNDataset('fcnn_train.csv', n_features=N_FEATURES_FCNN)
test_dataset_fcnn = FCNNDataset('fcnn_test.csv', n_features=N_FEATURES_FCNN)

BATCH_SIZE = 64 # Can be tuned
train_loader_fcnn = DataLoader(train_dataset_fcnn, batch_size=BATCH_SIZE, shuffle=True)
test_loader_fcnn = DataLoader(test_dataset_fcnn, batch_size=BATCH_SIZE)

HIDDEN_DIM1 = 128
HIDDEN_DIM2 = 64
model_fcnn = FullyConnectedNet(input_dim=N_FEATURES_FCNN, 
                               hidden_dim1=HIDDEN_DIM1, 
                               hidden_dim2=HIDDEN_DIM2, 
                               output_dim=1)

loss_fn_fcnn = paddle.nn.MSELoss()
LEARNING_RATE = 0.001
optimizer_fcnn = paddle.optimizer.Adam(learning_rate=LEARNING_RATE, parameters=model_fcnn.parameters())

EPOCHS = 50 # More epochs might be needed for NNs

print("\nStarting FCNN Training...")
for epoch in range(EPOCHS):
    model_fcnn.train() # Set model to training mode
    total_train_loss = 0
    
    for batch_id, (features, labels) in enumerate(train_loader_fcnn):
        predicts = model_fcnn(features)
        loss = loss_fn_fcnn(predicts, labels)
        loss.backward()
        optimizer_fcnn.step()
        optimizer_fcnn.clear_grad()
        total_train_loss += loss.item()
        
    avg_train_loss = total_train_loss / len(train_loader_fcnn)
    
    if (epoch + 1) % 10 == 0: # Print every 10 epochs
        print(f'FCNN Epoch [{epoch+1}/{EPOCHS}], Train Loss: {avg_train_loss:.4f}')

model_fcnn.eval() # Set model to evaluation mode
total_test_loss = 0
with paddle.no_grad(): # Disable gradient calculations for evaluation
    for features, labels in test_loader_fcnn:
        predicts = model_fcnn(features)
        test_loss = loss_fn_fcnn(predicts, labels)
        total_test_loss += test_loss.item()

avg_test_loss = total_test_loss / len(test_loader_fcnn)
print(f'\nFCNN Final Test Loss: {avg_test_loss:.4f}')</code></pre>
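Since MSELoss averages squared errors, the final test loss is easier to interpret after taking a square root: the RMSE is a typical prediction error in the labels' own units. The value below is a made-up placeholder, not a result from the run above:

```python
import math

mse = 0.25  # hypothetical value; substitute the avg_test_loss printed by the script
rmse = math.sqrt(mse)
print(f'RMSE: {rmse:.4f}')  # typical error in the same units as the labels
```
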
                </el-card>
            </el-col>
        </el-row>
    </div>
</template>

<script>
import MarkdownRenderer from '../../components/MarkdownRenderer.vue'
export default {
  components: { MarkdownRenderer },
    data() {
        return {
            yourMarkdownText: [
                '$$y = w_1x_1 + w_2x_2 + w_3x_3 + b$$',
                '$$L = \\frac{1}{2}(y - y_p)^2$$',
                '$$w_1 = w_1 - \\eta \\frac{\\partial L}{\\partial w_1}$$',
                '$$ \\frac{\\eta}{\\sqrt{\\hat{v}_t} + \\epsilon} $$'
            ]
        }
    }
}
</script>

<style>
.box-card + .box-card {
    margin-top: 20px;
}

/* Code block styling */
.code-block {
    background-color: #f5f7fa;
    border-radius: 4px;
    padding: 16px;
    margin: 8px 0;
    overflow-x: auto; /* horizontal scrollbar */
    font-family: 'Consolas', 'Courier New', monospace; /* monospace font */
    font-size: 14px;
    line-height: 1.5;
    color: #303133;
    border: 1px solid #ebeef5;
}

/* Preserve code formatting */
.code-block code {
    white-space: pre-wrap; /* preserve line breaks and spaces */
    word-break: break-all;
    font-family: 'Consolas', 'Courier New', monospace;
}
</style>