


\section{违背基本假设的情况}

\textbf{4} Briefly describe the idea and method of using weighted least squares to eliminate heteroscedasticity in multiple linear regression.


\begin{proof}[\bf Answer]
Recall the objective function of ordinary least squares:
\begin{equation}
    Q = \sum_{i=1}^{n}(y_{i} - \beta_{0}-\sum_{j=1}^{p}\beta_{j}x_{ij})^{2}
\end{equation}

Ordinary least squares assumes that the errors $\epsilon_{i},~i=1,2,\cdots,n$ share a common variance $\sigma_{i}^{2}=\sigma^{2}$. Each squared residual $e_{i}^{2}$ therefore carries the same weight: OLS treats every term in the residual sum of squares equally. Under heteroscedasticity, $Var(\epsilon_{i})=\sigma_{i}^{2}$ is no longer constant, so the terms contribute unequally: observations with large error variance $\sigma_{i}^{2}$ dominate the sum, while those with small variance are under-represented. Intuitively, we can transform the regression equation:
\begin{equation}
    y_{i}/\sigma_{i} = \beta_{0}/\sigma_{i} + \beta_{1}x_{i1}/\sigma_{i} +\cdots+ \beta_{p}x_{ip}/\sigma_{i} + \epsilon_{i}/\sigma_{i}
\end{equation}
Writing the new error term as $\epsilon_{i}'=\epsilon_{i}/\sigma_{i}$, its variance satisfies:
\begin{equation}
    Var(\epsilon_{i}^{'}) = 1
\end{equation}
so the equal-variance assumption holds. The objective function then takes the form:
\begin{equation}
    Q = \sum_{i=1}^{n}(y_{i} - \beta_{0}-\sum_{j=1}^{p}\beta_{j}x_{ij})^{2}/\sigma_{i}^{2}
\end{equation}
Since the $\sigma_{i}^{2}$ are unknown in advance, weighted least squares must estimate the weights; the standard form of the objective function is:
\begin{equation}
    Q = \sum_{i=1}^{n}w_{i}(y_{i} - \beta_{0}-\sum_{j=1}^{p}\beta_{j}x_{ij})^{2}
\end{equation}
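In matrix form, writing $W=\mathrm{diag}(w_{1},\cdots,w_{n})$, the weighted objective and its minimizer can be stated compactly:
\begin{equation}
    Q(\boldsymbol{\beta}) = (\boldsymbol{y}-X\boldsymbol{\beta})^{\top}W(\boldsymbol{y}-X\boldsymbol{\beta}), \qquad \hat{\boldsymbol{\beta}}_{W} = (X^{\top}WX)^{-1}X^{\top}W\boldsymbol{y}
\end{equation}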
\end{proof}

\textbf{6} Verify the estimation formula for the multiple weighted least squares regression coefficients.

\begin{proof}
Following the idea described in \textbf{4}, weighted least squares essentially transforms the equation so that it satisfies the homoscedasticity assumption:
\begin{equation}
    Q = \sum_{i=1}^{n}w_{i}(y_{i} - \beta_{0}-\sum_{j=1}^{p}\beta_{j}x_{ij})^{2} = \sum_{i=1}^{n}(\sqrt{w_{i}}y_{i} - \sqrt{w_{i}}\beta_{0}-\sqrt{w_{i}}\sum_{j=1}^{p}\beta_{j}x_{ij})^{2}
\end{equation}
This is equivalent to transforming the data as
\begin{eqnarray}
    \label{tans_y}
    \boldsymbol{y}^{*} = \sqrt{W}\boldsymbol{y}\\
    \label{trans_x}
    X^{*} = \sqrt{W}X
\end{eqnarray}
The transformed model
\begin{equation}
    \boldsymbol{y}^{*} = X^{*}\boldsymbol{\beta} + \boldsymbol{\epsilon}^{*}
\end{equation}
where $\boldsymbol{\epsilon}^{*}=\sqrt{W}\boldsymbol{\epsilon}$, satisfies the homoscedasticity assumption, so applying OLS yields:
\begin{equation}
    \label{beta_wls}
    \hat{\boldsymbol{\beta}}_{W} = ((X^{*})^{\top}X^{*})^{-1}(X^{*})^{\top}\boldsymbol{y}^{*}
\end{equation}
Substituting \eqref{tans_y} and \eqref{trans_x} into \eqref{beta_wls} and using $(\sqrt{W})^{\top}\sqrt{W}=W$ gives
\begin{equation}
\hat{\boldsymbol{\beta}}_{W} = (X^{\top}WX)^{-1}X^{\top}W\boldsymbol{y}
\end{equation}
\end{proof}
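As a quick numerical check (a NumPy sketch with made-up illustrative data, not the textbook data or the original R session), the normal-equation form $(X^{\top}WX)^{-1}X^{\top}W\boldsymbol{y}$ and OLS on the transformed data $\sqrt{W}\boldsymbol{y}$, $\sqrt{W}X$ give the same coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up illustrative data: one predictor plus an intercept column.
n = 50
x = rng.uniform(1.0, 10.0, size=n)
X = np.column_stack([np.ones(n), x])      # design matrix
y = 2.0 + 0.5 * x + rng.normal(0.0, x)    # error sd grows with x: heteroscedastic
w = 1.0 / x**2                            # weights w_i = 1 / sigma_i^2 (up to a constant)
W = np.diag(w)

# Route 1: WLS normal equations, beta = (X^T W X)^{-1} X^T W y
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Route 2: OLS on the transformed data sqrt(W) y, sqrt(W) X
sw = np.sqrt(w)
beta_trans = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

# Both routes agree, confirming the derivation above.
assert np.allclose(beta_wls, beta_trans)
print(beta_wls)
```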

\textbf{8} For the data in Table 4-3:

(1) Fit an ordinary least squares regression of regional GDP $y$ on fixed-asset investment $x$, and diagnose whether heteroscedasticity is present.

Run the OLS regression and the Spearman rank correlation test:
	\begin{lstlisting}[language=R]
            > x <- c(25669,17885,32070,13050,18128,22247,
                     14777,15386,28179,77388,47251,24408,
                     28811,18499,68024,40472,32665,31551,
                     80855,18318,4053,17741,32935,11777,
                     14788,1151,19400,7200,2572,3169,9650)
            > y <- c(7944,12779,31750,14198,15080,6692,13923,
                     10648,6756,49663,30276,27033,23237,19694,
                     53323,40415,30012,28353,33304,18237,3890,
                     16048,28812,13204,16119,1596,20825,9664,
                     3528,3794,10288)
            
            > model <- lm(y~x)
            > summary(model)
            > cor.test(x,abs(model$residuals),method = "spearman")
	\end{lstlisting}
	
	\begin{lstlisting}
            Call:
            lm(formula = y ~ x)
            
            Residuals:
               Min     1Q Median     3Q    Max 
            -17616  -3108    706   4388  12358 
            
            Coefficients:
                         Estimate Std. Error t value Pr(>|t|)    
            (Intercept) 5.143e+03  2.008e+03   2.561   0.0159 *  
            x           5.662e-01  6.274e-02   9.024 6.44e-10 ***
            ---
            Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
            
            Residual standard error: 6908 on 29 degrees of freedom
            Multiple R-squared:  0.7374,	Adjusted R-squared:  0.7283 
            F-statistic: 81.42 on 1 and 29 DF,  p-value: 6.436e-10
            
                Spearman's rank correlation rho
            
            data:  x and abs(model$residuals)
            S = 2538, p-value = 0.005838
            alternative hypothesis: true rho is not equal to 0
            sample estimates:
                  rho 
            0.4883065
	\end{lstlisting}
	By the Spearman rank correlation test, the observed statistic is $S = 2538$ with p-value $0.005838 < 0.01$; the null hypothesis that $x$ and $|e_{i}|$ are uncorrelated is rejected at the 1\% level, so heteroscedasticity is judged to be present.
    
(2) If heteroscedasticity is present, fit a weighted least squares regression and assess its effect.

    Run the WLS regression, assuming the weight function $w_{i} = \frac{1}{x_{i}^{m}}$:
    \begin{lstlisting}
            > # WLS
            > m = c(-2,-1.5,-1,-0.5,0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5)
            > LOGLIKE = - Inf
            > best_m = -2
            > for(i in m)
            + {
            +   model_temp = lm(y~x, weights = (x)^{-i})
            +   LOGLIKE_c = logLik(model_temp)
            +   if(LOGLIKE_c>LOGLIKE)
            +   {
            +     LOGLIKE = LOGLIKE_c
            +     best_m = i
            +   }
            +   print(i)
            +   print(LOGLIKE_c)
            + }
            
            [1] -2
            'log Lik.' -345.919 (df=3)
            [1] -1.5
            'log Lik.' -337.3552 (df=3)
            [1] -1
            'log Lik.' -329.571 (df=3)
            [1] -0.5
            'log Lik.' -322.7347 (df=3)
            [1] 0
            'log Lik.' -317.0088 (df=3)
            [1] 0.5
            'log Lik.' -312.4262 (df=3)
            [1] 1
            'log Lik.' -308.7733 (df=3)
            [1] 1.5
            'log Lik.' -305.9579 (df=3)
            [1] 2
            'log Lik.' -304.2231 (df=3)
            [1] 2.5
            'log Lik.' -303.9507 (df=3)
            [1] 3
            'log Lik.' -305.5028 (df=3)
            [1] 3.5
            'log Lik.' -309.104 (df=3)
            [1] 4
            'log Lik.' -314.9144 (df=3)
            [1] 4.5
            'log Lik.' -323.0175 (df=3)
            [1] 5
            'log Lik.' -333.0919 (df=3)
    \end{lstlisting}
    In the output above, the log-likelihood is largest at $m=2.5$. Below, the WLS fitted line for $m=2.5$ is plotted against the OLS line for comparison.
	\begin{figure}[H]
	\centering
	\includegraphics[scale=0.6]{Graph//fit_4_8.png}
	\caption{Comparison of OLS and WLS fitted lines}
	\label{fit_4_8}
	\end{figure}
	As Figure \ref{fit_4_8} shows, the OLS line is pulled down on the right by the large values of $x$, whereas WLS, by weighting the residual sum of squares, improves the fit for the observations with smaller variance.
	
\textbf{12} Monthly sales data of a software company (Table 4-12, p.\,126), where $x$ is the head office's monthly sales (10{,}000 yuan) and $y$ is a branch office's monthly sales (10{,}000 yuan).

(1) Fit a regression of $y$ on $x$ by ordinary least squares.


\begin{lstlisting}
       > x <- c(127.3, 130.0, 132.7, 129.4, 135.0,
                137.1, 141.1, 142.8, 145.5, 145.3,
                148.3, 146.4, 150.2, 153.1, 157.3,
                160.7, 164.2, 165.6, 168.7, 172.0)

       > y <- c(20.96, 21.40, 21.96, 21.52, 22.39,
               22.76, 23.48, 23.66, 24.10, 24.01,
               24.54, 24.28, 25.00, 25.64, 26.46,
               26.98, 27.52, 27.78, 28.24, 28.78)
        
       > model <- lm(y~x)
       > summary(model)
        Call:

        lm(formula = y ~ x)
        
        Residuals:
              Min        1Q    Median        3Q       Max 
        -0.151659 -0.068633 -0.003432  0.046715  0.184384 
        
        Coefficients:
                     Estimate Std. Error t value Pr(>|t|)    
        (Intercept) -1.434832   0.241956   -5.93  1.3e-05 ***
        x            0.176163   0.001632  107.93  < 2e-16 ***
        ---
        Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
        
        Residual standard error: 0.09744 on 18 degrees of freedom
        Multiple R-squared:  0.9985,    Adjusted R-squared:  0.9984 
        F-statistic: 1.165e+04 on 1 and 18 DF,  p-value: < 2.2e-16
\end{lstlisting}

The regression model fitted by ordinary least squares is:
\begin{equation}
    \hat{y}_{i} = 0.176163\,x_{i} - 1.434832.
\end{equation}

(2) Use the residual plot and the DW test to diagnose serial correlation of the errors.
	\begin{figure}[H]
	\centering
	\includegraphics[scale=0.5]{Graph//resplot_4_12.png}
	\caption{Residual plot}
	\label{resplot_4_12}
	\end{figure}
    
    DW test code:
    \begin{lstlisting}
    > library(lmtest)
    > dwtest(model)
    
        Durbin-Watson test

    data:  model
    DW = 0.66325, p-value = 6.284e-05
    alternative hypothesis: true autocorrelation is greater than 0
    \end{lstlisting}
    The DW test has null hypothesis $H_{0}$: the random errors have no first-order autocorrelation, against the alternative $H_{1}$: the random errors have first-order autocorrelation. The observed DW statistic is 0.66325 with p-value 6.284e-05, so the null hypothesis is rejected: the random errors are first-order autocorrelated.
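The estimate of $\rho$ used in the iterative method below comes from the standard approximate relation between the DW statistic and the first-order autocorrelation of the residuals:
\begin{equation}
    DW = \frac{\sum_{t=2}^{n}(e_{t}-e_{t-1})^{2}}{\sum_{t=1}^{n}e_{t}^{2}} \approx 2(1-\hat{\rho}),
    \qquad \text{so} \quad \hat{\rho} \approx 1-\frac{DW}{2} = 1-\frac{0.66325}{2} = 0.668375.
\end{equation}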

(4) Use the iterative method to handle the serial correlation and build the regression equation.
    \begin{lstlisting}
    > n <- length(y)
    > dw <- 0.66325
    > rho_hat <- 1 - dw/2
    > y_prime <- y[2:n] - rho_hat * y[1:(n-1)]
    > x_prime <- x[2:n] - rho_hat * x[1:(n-1)]
    > model_co <- lm(y_prime~x_prime)
    > dwtest(model_co)
            Durbin-Watson test

    data:  model_co
    DW = 1.3597, p-value = 0.0431
    alternative hypothesis: true autocorrelation is greater than 0
    \end{lstlisting}
    With $\hat{\rho} = 0.668375$, back-substituting $y_{t}' = y_{t} - \hat{\rho}y_{t-1}$ and $x_{t}' = x_{t} - \hat{\rho}x_{t-1}$ into the fitted equation $\hat{y}_{t}' = -0.283853 + 0.172686\,x_{t}'$ gives, in terms of the original variables:
    \begin{equation}
        \hat{y}_{t} = -0.2839 + 0.6684\,y_{t-1} + 0.1727\,x_{t} - 0.1154\,x_{t-1}
    \end{equation}
    A second iteration brings no clear improvement in the DW statistic, so one iteration suffices.
    \begin{lstlisting}
        > summary(model_co)

        Call:
        lm(formula = y_prime ~ x_prime)
        
        Residuals:
              Min        1Q    Median        3Q       Max 
        -0.101254 -0.049074 -0.000154  0.034796  0.123151 
        
        Coefficients:
                     Estimate Std. Error t value Pr(>|t|)    
        (Intercept) -0.283853   0.167929   -1.69    0.109    
        x_prime      0.172686   0.003475   49.69   <2e-16 ***
        ---
        Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
        
        Residual standard error: 0.06897 on 17 degrees of freedom
        Multiple R-squared:  0.9932,    Adjusted R-squared:  0.9928 
        F-statistic:  2469 on 1 and 17 DF,  p-value: < 2.2e-16
        \end{lstlisting}

(5) Difference method.
    \begin{lstlisting}
    > n <- length(y)
    > y_prime <- y[2:n] - y[1:(n-1)]
    > x_prime <- x[2:n] - x[1:(n-1)]
    > model_diff <- lm(y_prime~x_prime)
    > 
    > dwtest(model_diff)
    
            Durbin-Watson test
    
    data:  model_diff
    DW = 1.4798, p-value = 0.1364
    alternative hypothesis: true autocorrelation is greater than 0
    \end{lstlisting}
    Since the intercept is insignificant (p-value 0.22), the regression equation is:
    \begin{equation}
        \hat{y}_{t} = y_{t-1} + 0.161\,(x_{t} - x_{t-1})
    \end{equation}
    \begin{lstlisting}
    > summary(model_diff)

    Call:
    lm(formula = y_prime ~ x_prime)
    
    Residuals:
          Min        1Q    Median        3Q       Max 
    -0.126529 -0.058215 -0.000915  0.050771  0.140315 
    
    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept) 0.032892   0.025847   1.273     0.22    
    x_prime     0.160963   0.008243  19.528 4.42e-13 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
    
    Residual standard error: 0.07449 on 17 degrees of freedom
    Multiple R-squared:  0.9573,    Adjusted R-squared:  0.9548 
    F-statistic: 381.3 on 1 and 17 DF,  p-value: 4.42e-13
    \end{lstlisting}
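    The difference transformation above is the special case $\rho = 1$ of the generalized difference used in the iterative method of (4): under the first-order autoregressive error model $\epsilon_{t} = \rho\epsilon_{t-1} + u_{t}$, subtracting $\rho$ times the lagged equation gives
    \begin{equation}
        y_{t}-\rho y_{t-1} = \beta_{0}(1-\rho) + \beta_{1}(x_{t}-\rho x_{t-1}) + u_{t},
    \end{equation}
    which reduces to first differencing when $\rho = 1$ (the intercept then drops out).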
    
(6) Compare the quality of the regression equations built by the various methods.

In this problem the estimated autocorrelation coefficient is about 0.67, which is not close to 1, so the difference method is not well suited. Comparing the iterative method with the difference method, the iterative method has the higher $R^{2}$ and the better fit. Comparing the iterative method with OLS, the residual standard error is 0.09744 for OLS and 0.06897 for the iterative method, so the iterative method's estimate of the error variance is smaller. Hence the iterative method is best.