\documentclass{rl}

%\usepackage[ansinew]{inputenc}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}

\begin{document}

% no., submission date, tutor, group name, Name1...Name4
\Abgabeblatt{3}{12.06.2012}{Tutor}{Gruppenname}%
                {Valentin Peka}{Jonas Hansen}{Ghislain Wamo}%

\section*{Problem 1 (Dynamic Programming)}
\subsection*{a)}
In order to apply dynamic programming to a reinforcement learning task, we must make sure that it is possible to model the environment as a Markov Decision Process (MDP). That means the environment should have a finite state space
\(S = \{s_1, s_2, \dots, s_n\}\) and a finite action space \(A = \{ a_1, a_2, \dots, a_m\}\).\\
The environment dynamics should be completely known and are given by:\\
the transition probabilities \( P^a _{ss^\prime} = P\{ S_{t+1} = s^\prime \mid S_t = s, A_t = a\}\),\\
and the expected immediate rewards \( R_{ss^\prime}^a = E\{ r_{t+1} \mid S_t = s, A_t = a, S_{t+1} = s^\prime\}.\)
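As a small illustration (the states, probabilities and rewards below are invented for this sketch, not taken from the exercise), such environment dynamics can be stored as nested Python dictionaries and checked for consistency:

```python
# Hypothetical two-state MDP (all numbers invented for illustration):
# P[s][a][s'] is the transition probability, R[s][a][s'] the expected reward.
P = {
    's1': {'a1': {'s1': 0.9, 's2': 0.1}, 'a2': {'s2': 1.0}},
    's2': {'a1': {'s1': 0.5, 's2': 0.5}, 'a2': {'s2': 1.0}},
}
R = {
    's1': {'a1': {'s1': 0.0, 's2': 1.0}, 'a2': {'s2': 0.5}},
    's2': {'a1': {'s1': 0.0, 's2': 0.0}, 'a2': {'s2': -1.0}},
}

def check_stochastic(P):
    # every successor distribution P[s][a] must sum to 1
    return all(abs(sum(dist.values()) - 1.0) < 1e-12
               for actions in P.values() for dist in actions.values())

print(check_stochastic(P))  # True
```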

\subsection*{b)}
Policy iteration is the alternating execution of policy evaluation and policy improvement until the policy reaches a stable state.\\
Value iteration is the execution of the policy evaluation, followed by the construction of a greedy policy. The difference between policy iteration and value iteration is that policy iteration repeatedly calls the evaluation and improvement steps, taking the changes of the policy after each step into account.\\
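Schematically, the alternation of evaluation and improvement can be sketched on a toy problem (everything below, including the 2-state MDP, is invented for illustration):

```python
gamma = 0.9
n_states, n_actions = 2, 2

def step(s, a):
    # action 0 stays in s, action 1 switches states; reward 1 for landing in state 1
    s2 = s if a == 0 else 1 - s
    return s2, (1.0 if s2 == 1 else 0.0)

def evaluate(policy):
    # iterative policy evaluation, run to (near) convergence
    V = [0.0] * n_states
    for _ in range(1000):
        new_V = []
        for s in range(n_states):
            s2, r = step(s, policy[s])
            new_V.append(r + gamma * V[s2])
        V = new_V
    return V

def improve(policy, V):
    # greedy policy improvement; also reports whether the policy changed
    new_policy = []
    for s in range(n_states):
        q = [step(s, a)[1] + gamma * V[step(s, a)[0]] for a in range(n_actions)]
        new_policy.append(q.index(max(q)))
    return new_policy, new_policy != policy

# policy iteration: alternate evaluation and improvement until stable
policy = [0, 0]
changed = True
while changed:
    V = evaluate(policy)
    policy, changed = improve(policy, V)
print(policy)  # -> [1, 0]: always move to (and stay in) state 1
```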

Policy evaluation is the process of computing the state-value function \(V^\pi\) for an arbitrary policy \(\pi\); it is also called the prediction problem.\\
Using policy evaluation, the solution of the Bellman equation\\
\(V^\pi (s) = \sum \limits_a \pi(s,a) \sum \limits_{s^\prime} P^a _{ss^\prime}[R_{ss^\prime}^a + \gamma V^\pi (s^\prime) ]  \)\\
can be given as \( V^\pi = [I- \gamma P^\pi]^{-1}M^\pi \).\\
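As a sanity check, the fixed point characterized by this equation can be computed in plain Python without forming the matrix inverse explicitly, by iterating the Bellman operator (the 3-state transition matrix and rewards below are invented for illustration; \(\gamma = 0.8\)):

```python
# Hypothetical 3-state example; transition matrix and rewards invented, gamma = 0.8.
gamma = 0.8
P_pi = [[0.5, 0.5, 0.0],
        [0.0, 0.5, 0.5],
        [0.5, 0.0, 0.5]]
M_pi = [0.0, 1.0, -1.0]

# Iterate V <- M + gamma P V; the operator is a contraction and converges
# to the same fixed point as the closed-form inverse.
V = [0.0, 0.0, 0.0]
for _ in range(500):
    V = [M_pi[i] + gamma * sum(P_pi[i][j] * V[j] for j in range(3))
         for i in range(3)]

# V now satisfies the Bellman equation up to numerical precision
for i in range(3):
    assert abs(V[i] - (M_pi[i] + gamma * sum(P_pi[i][j] * V[j] for j in range(3)))) < 1e-9
```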

Policy improvement is the process of making the policy better with respect to the value function of the original policy.\\

 
\section*{Problem 2 (Policy Evaluation and Policy Iteration)}
\subsection*{a)}
Show that for an ergodic and aperiodic matrix \( P^\pi \) of a policy \(\pi\), a state space \(S = \{s_1,\dots, s_n\}\) and \(0 \le \gamma < 1\), the inverse \([I-\gamma P^\pi]^{-1}\) always exists and is given by\\
\begin{gather}
[I-\gamma P^\pi]^{-1} = \sum \limits_{k=0}^{\infty}(\gamma P^\pi)^k,
\end{gather}
The Bellman equation for the state-value function (4.7) from the lecture, \( V^\pi = M^\pi + \gamma P^\pi V^\pi\), can be rearranged into
\([I-\gamma P^\pi]V^\pi = M^\pi\), and the state-value function is therefore:

\begin{gather}
 V^\pi = [I-\gamma P^\pi]^{-1}M^\pi
\end{gather}
For an arbitrary initial state-value \(V_0\) and an iteration over the successive state-values \(V_1, V_2, V_3, \dots\), this approximation is given by the Bellman equation
\begin{gather}
 V^\pi _{k+1} = M^\pi + \gamma P^\pi V^\pi _k 
\end{gather}
Let the state space be \(S = \{s_0, s_1, \dots, s_n\}\). According to the previous equation:\\
\(V^\pi _1 = M^\pi + \gamma P^\pi V^\pi _0\)\\
\(V^\pi _2 = M^\pi + \gamma P^\pi V^\pi _1 = M^\pi + (\gamma P^\pi)M^\pi + (\gamma P^\pi)^2 V^\pi _0 \)\\
\(V^\pi _3 = M^\pi + \gamma P^\pi V^\pi _2 = M^\pi + (\gamma P^\pi)M^\pi + (\gamma P^\pi)^2 M^\pi + (\gamma P^\pi)^3 V^\pi _0 \)\\
\(\vdots\)\\
\(V^\pi _k = M^\pi + \gamma P^\pi V^\pi _{k-1} = \sum \limits_{j=0} ^{k-1} (\gamma P^\pi)^j M^\pi + (\gamma P^\pi)^k V^\pi _0 \)\\
Considering the limit of \(V^\pi _k\) for \( k \to \infty \), the term \( (\gamma P^\pi)^k V^\pi _0 \) vanishes, since \( 0 \le \gamma < 1 \) and \(P^\pi\) is a stochastic matrix.\\
This yields:\\
\(\lim \limits_{k \to \infty} V^\pi _k = \sum \limits_{k=0} ^{\infty} (\gamma P^\pi)^k M^\pi \),\\
while by equation (2) \(V^\pi = [I-\gamma P^\pi]^{-1}M^\pi \).\\
Comparing the two expressions gives \([I-\gamma P^\pi]^{-1}M^\pi = \sum \limits_{k=0} ^{\infty} (\gamma P^\pi)^k M^\pi\)\\
\(\Rightarrow [I-\gamma P^\pi]^{-1} = \sum \limits_{k=0} ^{\infty} (\gamma P^\pi)^k \)
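This identity can also be checked numerically; the following sketch (with an invented \(2 \times 2\) stochastic matrix and \(\gamma = 0.8\)) compares a truncated partial sum of the series against the defining property of the inverse:

```python
gamma = 0.8
P_pi = [[0.1, 0.9],
        [0.6, 0.4]]   # invented 2x2 stochastic matrix

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

gP = [[gamma * x for x in row] for row in P_pi]

# accumulate the partial sum S = sum_{k=0}^{299} (gamma P)^k
S = [[0.0, 0.0], [0.0, 0.0]]
term = [[1.0, 0.0], [0.0, 1.0]]     # (gamma P)^0 = I
for _ in range(300):
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = matmul(term, gP)

# S should invert (I - gamma P): their product is (numerically) the identity
I_minus_gP = [[(1.0 if i == j else 0.0) - gP[i][j] for j in range(2)]
              for i in range(2)]
prod = matmul(I_minus_gP, S)
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-9
```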
\subsection*{b)}
Given the operators:\\
\(B : V^\Pi \rightarrow V^\Pi\), defined by \( B(V^\pi) = \max_{\pi^\prime \in \Pi} \{M^{\pi^\prime} + T_{\pi^\prime} V^\pi\}\),\\
and \(Z : V^\Pi \rightarrow V^\Pi\), defined by \( Z(V^\pi) = V^\pi - T^{-1}_{\pi^{\prime \prime}}B(V^\pi)\), where \(\pi^{\prime \prime}\) denotes the maximizing policy in \(B\).\\
We show that the sequence \( \{V^{\pi_{0}}, V^{\pi_{1}}, V^{\pi_{2}}, \dots\}\) generated by the policy iteration can be described as\\ \( V^{\pi_{n+1}} = Z(V^{\pi_{n}})\)\\
\\
For the policy iteration we use the property that\\
\( B(V^{\pi_n}) = \max_{\pi^\prime \in \Pi} \{M^{\pi^\prime} + T_{\pi^\prime} V^{\pi_n}\} = M^{\pi_{n+1}} + T_{\pi_{n+1}}V^{\pi_n}\) for the following iteration step \(n+1\),\\
with \(T\) defined as \( T_{\pi^\prime} = \gamma P^{\pi^\prime} - I \Rightarrow -T_{\pi^\prime} = I - \gamma P^{\pi^\prime}\)\\
\\
By the definition of the \(Z\) operator:\\
\(Z(V^{\pi_n}) = V^{\pi_n} - T^{-1}_{\pi_{n+1}}B(V^{\pi_n})\)\\
\(  = V^{\pi_n} - T^{-1}_{\pi_{n+1}}(M^{\pi_{n+1}} + T_{\pi_{n+1}}V^{\pi_n}) \)\\
\( = V^{\pi_n} - T^{-1}_{\pi_{n+1}}M^{\pi_{n+1}} - I V^{\pi_n}\)\\
\( = - T^{-1}_{\pi_{n+1}}M^{\pi_{n+1}}\)\\
\( = [I- \gamma P^{\pi_{n+1}}]^{-1}M^{\pi_{n+1}} = V^{\pi_{n+1}}\)\\
Hence the sequence can indeed be written as \( V^{\pi_{n+1}} = Z(V^{\pi_{n}})\).\\

\section*{Problem 3 (Point Robot in a Mazeworld)}
\subsection*{a)}
Implementation of the policy iteration, consisting of the policy evaluation and the policy improvement:
\begin{verbatim}
from math import *

# state and action space
action = ('F', 'L', 'R')
state = (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)

# given policy from Table 2
policy = [{'F':0,'L':1,'R':0,},{'F':1,'L':0,'R':0},{'F':0,'L':0,'R':1},
{'F':0,'L':0,'R':1},{'F':1,'L':0,'R':0},{'F':0,'L':1,'R':0},{'F':1,'L':0,'R':0},
{'F':0,'L':1,'R':0},{'F':1,'L':0,'R':0},{'F':0,'L':0,'R':1},{'F':0,'L':1,'R':0},
{'F':0,'L':1,'R':0},{'F':0,'L':0,'R':1},{'F':1,'L':0,'R':0},{'F':0,'L':1,'R':0},
{'F':1,'L':0,'R':0}]

# alternative policy for exercise 3c)
#policy = [{'F':0,'L':0,'R':1,},{'F':0,'L':1,'R':0},{'F':0,'L':0,'R':1},
#{'F':1,'L':0,'R':0},{'F':0,'L':1,'R':0},{'F':0,'L':0,'R':1},{'F':1,'L':0,'R':0},
#{'F':0,'L':0,'R':1},{'F':0,'L':1,'R':0},{'F':1,'L':0,'R':0},{'F':1,'L':0,'R':0},
#{'F':0,'L':1,'R':0},{'F':0,'L':1,'R':0},{'F':0,'L':1,'R':0},{'F':0,'L':0,'R':1},
#{'F':1,'L':0,'R':0}]

# threshold for terminating the policy evaluation
epsi = 0.0000001      

def policy_evaluation(V_pi_new) :
    delta = 1    
    while delta>epsi :
        V_pi = V_pi_new
        P_pi = [[0 for i in range(len(state))] for i in range(len(state))]
        M_pi = [[0] for i in range(len(state))]
        # loop over all states
        for i in range(len(policy)) :  
            # loop over all successor states
            for j in range(len(state)) :
                # P_pi as the sum over all actions
                P_pi[i][j] = sum(policy[i][action[a]] * P(state[i],state[j],action[a]) \
                for a in range(len(action)))
            # M_pi as the sum over all actions and the sum over
            # the respective successor states
            M_pi[i][0] = sum(policy[i][action[a]] * sum( P(state[i], s_j, action[a]) * \
            R(state[i], s_j) \
            for s_j in range(len(state)) ) for a in range(len(action)))
        # V_pi_new = M_pi + gamma * P_pi * V_pi        
        V_pi_new = add(M_pi, mult(skalMult(0.8,P_pi),V_pi))
        delta = maxSub(V_pi_new,V_pi)
    return V_pi_new
    
def policy_improvement(V_pi) :
    global policy
    new_policy = [{'F':0,'L':0,'R':0} for i in range(len(policy))]
    policy_stable = 1
    # loop over all states
    for i in range(len(policy)):
        maxi = 0
        # find the action prescribed by the current policy for the later comparison
        for al in range(len(action)):
            if policy[i][action[al]] > maxi:
                b = action[al]
        maxActionList = []
        # store the values of all actions in a list
        for a in range(len(action)):
            maxActionList.append( sum( P(state[i], s_j, action[a]) * (R(state[i], s_j) \
            + 0.8*V_pi[s_j][0]) \
            for s_j in range(len(state))))
        # find the maximum of the list and store it as the new policy entry
        # of the current state
        maxAction = max(maxActionList)
        changed = 0     # only one action per state
        for al in range(len(maxActionList)):
            if maxActionList[al] == maxAction and changed == 0:
                # store the new policy
                policy[i][action[al]]=1
                # remember the chosen action for the later comparison
                pi = action[al]           
                changed = 1
            # set all other actions of the policy to 0
            else: policy[i][action[al]]=0
        if pi != b : policy_stable = 0    
    return policy_stable
    
def policy_iteration() :    
    V = [[0] for i in range(len(state))]
    stable = 0
    # iterate policy evaluation and policy improvement until the policy is stable
    while stable == 0 :
        V_new = policy_evaluation(V)
        V = V_new
        stable = policy_improvement(V)
        print_dictList(policy)
    print "STABLE"
       
def P(s_i, s_j, a) :
    if s_i==0 and s_j==0 and a=='F' : return 1
    elif s_i==0 and s_j==3 and a=='L' : return 1
    elif s_i==0 and s_j==1 and a=='R' : return 1
#------------------------------------------------
    elif s_i==1 and s_j==5 and a=='F' : return 1
    elif s_i==1 and s_j==0 and a=='L' : return 1
    elif s_i==1 and s_j==2 and a=='R' : return 1
#------------------------------------------------
    elif s_i==2 and s_j==2 and a=='F' : return 1
    elif s_i==2 and s_j==1 and a=='L' : return 1
    elif s_i==2 and s_j==3 and a=='R' : return 1
#------------------------------------------------
    elif s_i==3 and s_j==3 and a=='F' : return 1
    elif s_i==3 and s_j==2 and a=='L' : return 1
    elif s_i==3 and s_j==0 and a=='R' : return 1
#------------------------------------------------
    elif s_i==4 and s_j==4 and a=='F' : return 1
    elif s_i==4 and s_j==7 and a=='L' : return 1
    elif s_i==4 and s_j==5 and a=='R' : return 1
#------------------------------------------------
    elif s_i==5 and s_j==9 and a=='F' : return 1
    elif s_i==5 and s_j==4 and a=='L' : return 1
    elif s_i==5 and s_j==6 and a=='R' : return 1
#------------------------------------------------
    elif s_i==6 and s_j==6 and a=='F' : return 1
    elif s_i==6 and s_j==5 and a=='L' : return 1
    elif s_i==6 and s_j==7 and a=='R' : return 1
#------------------------------------------------
    elif s_i==7 and s_j==3 and a=='F' : return 1
    elif s_i==7 and s_j==6 and a=='L' : return 1
    elif s_i==7 and s_j==4 and a=='R' : return 1
#------------------------------------------------
    elif s_i==8 and s_j==8 and a=='F' : return 1
    elif s_i==8 and s_j==11 and a=='L' : return 1
    elif s_i==8 and s_j==9 and a=='R' : return 1
#------------------------------------------------
    elif s_i==9 and s_j==13 and a=='F' : return 1
    elif s_i==9 and s_j==8 and a=='L' : return 1
    elif s_i==9 and s_j==10 and a=='R' : return 1
#------------------------------------------------
    elif s_i==10 and s_j==10 and a=='F' : return 1
    elif s_i==10 and s_j==9 and a=='L' : return 1
    elif s_i==10 and s_j==11 and a=='R' : return 1
#------------------------------------------------
    elif s_i==11 and s_j==7 and a=='F' : return 1
    elif s_i==11 and s_j==10 and a=='L' : return 1
    elif s_i==11 and s_j==8 and a=='R' : return 1
#------------------------------------------------
    elif s_i==12 and s_j==12 and a=='F' : return 1
    elif s_i==12 and s_j==15 and a=='L' : return 1
    elif s_i==12 and s_j==13 and a=='R' : return 1
#------------------------------------------------
    elif s_i==13 and s_j==13 and a=='F' : return 1
    elif s_i==13 and s_j==12 and a=='L' : return 1
    elif s_i==13 and s_j==14 and a=='R' : return 1
#------------------------------------------------
    elif s_i==14 and s_j==14 and a=='F' : return 1
    elif s_i==14 and s_j==13 and a=='L' : return 1
    elif s_i==14 and s_j==15 and a=='R' : return 1
#------------------------------------------------
    elif s_i==15 and s_j==11 and a=='F' : return 1
    elif s_i==15 and s_j==14 and a=='L' : return 1
    elif s_i==15 and s_j==12 and a=='R' : return 1
    else : return 0
    
def R(s_i, s_j) :
    if s_i == s_j : return -1
    elif s_j == 15 : return 1
    else : return 0
    
def zero(m,n):
    new_matrix = [[0 for row in range(n)] for col in range(m)]
    return new_matrix
    
def mult(matrix1,matrix2):
    if len(matrix1[0]) != len(matrix2):
        print 'Matrices must be m*n and n*p to multiply!'
    else:
        new_matrix = zero(len(matrix1),len(matrix2[0]))
        for i in range(len(matrix1)):
            for j in range(len(matrix2[0])):
                for k in range(len(matrix2)):
                    new_matrix[i][j] += matrix1[i][k]*matrix2[k][j]
        return new_matrix
        
def add(matrix1,matrix2):
    if len(matrix1) != len(matrix2) or len(matrix1[0]) != len(matrix2[0]):
        print 'Matrix must be the same length'
    else:
        new_matrix = zero(len(matrix1),len(matrix2[0]))
        for i in range(len(matrix1)):
            for j in range(len(matrix1[0])):
                new_matrix[i][j] = matrix1[i][j] + matrix2[i][j]
        return new_matrix
        
def skalMult(skalar,matrix):
    new_matrix = zero(len(matrix),len(matrix[0]))
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            new_matrix[i][j]=matrix[i][j]*skalar
    return new_matrix 
        
def maxSub(matrix1,matrix2):
    max = 0
    if len(matrix1) != len(matrix2) or len(matrix1[0]) != len(matrix2[0]):
        print 'Matrix must be the same length'
    else:
        for i in range(len(matrix1)):
            for j in range(len(matrix1[0])):
                if max < fabs(matrix1[i][j] - matrix2[i][j]) :
                    max = fabs(matrix1[i][j] - matrix2[i][j])
        return max
        
def print_dictList(dicti):
    for i in range(len(dicti)):     
            if dicti[i]['F']==1: print 'F',
            elif dicti[i]['L']==1: print 'L',
            elif dicti[i]['R']==1: print 'R', 
    print " "                      

# function calls for exercises 3b) and 3c)
#V = [[0] for i in range(len(state))]
#print policy_evaluation(V)

policy_iteration()
\end{verbatim}
\pagebreak
Implementation of the value iteration:
\begin{verbatim}
from math import *

# state and action space
action = ('F', 'L', 'R')
state = (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)

# given policy
policy = [{'F':0,'L':1,'R':0,},{'F':1,'L':0,'R':0},{'F':0,'L':0,'R':1},
{'F':0,'L':0,'R':1},{'F':1,'L':0,'R':0},{'F':0,'L':1,'R':0},{'F':1,'L':0,'R':0},
{'F':0,'L':1,'R':0},{'F':1,'L':0,'R':0},{'F':0,'L':0,'R':1},{'F':0,'L':1,'R':0},
{'F':0,'L':1,'R':0},{'F':0,'L':0,'R':1},{'F':1,'L':0,'R':0},{'F':0,'L':1,'R':0},
{'F':1,'L':0,'R':0}]

# threshold for terminating the policy evaluation
epsi = 0.0000001  

def value_iteration() :
    global policy
    print_dictList(policy)
    V_pi_new = [[0] for i in range(len(state))]
    delta = 1    
    while delta>epsi :
        V_pi = V_pi_new
        P_pi = [[0 for i in range(len(state))] for i in range(len(state))]
        M_pi = [[0] for i in range(len(state))]
        # loop over all states
        for i in range(len(policy)) :  
            # loop over all successor states
            for j in range(len(state)) :
                # P_pi as the sum over all actions
                P_pi[i][j] = sum(policy[i][action[a]] * P(state[i],state[j],action[a]) \
                for a in range(len(action)))
            # M_pi as the sum over all actions and the sum over the
            # respective successor states
            M_pi[i][0] = sum(policy[i][action[a]] * sum( P(state[i], s_j, action[a]) * \
            R(state[i], s_j) \
            for s_j in range(len(state)) ) for a in range(len(action)))
        # V_pi_new = M_pi + gamma * P_pi * V_pi
        V_pi_new = add(M_pi, mult(skalMult(0.8,P_pi),V_pi))
        delta = maxSub(V_pi_new,V_pi)
    # loop over all states
    for i in range(len(policy)):
        maxActionList = []
        # store the values of all actions in a list
        for a in range(len(action)):
            maxActionList.append( sum( P(state[i], s_j, action[a]) * (R(state[i], s_j) \
            + 0.8*V_pi[s_j][0]) \
            for s_j in range(len(state))))
        # find the maximum of the list and store it as the new policy
        # entry of the current state
        maxAction = max(maxActionList)
        changed = 0         # only one action per state
        for b in range(len(maxActionList)):
            if maxActionList[b] == maxAction and changed == 0:
                policy[i][action[b]]=1
                changed = 1
            else: policy[i][action[b]]=0
    print_dictList(policy)
    
    
def P(s_i, s_j, a) :
    if s_i==0 and s_j==0 and a=='F' : return 1
    elif s_i==0 and s_j==3 and a=='L' : return 1
    elif s_i==0 and s_j==1 and a=='R' : return 1
#------------------------------------------------
    elif s_i==1 and s_j==5 and a=='F' : return 1
    elif s_i==1 and s_j==0 and a=='L' : return 1
    elif s_i==1 and s_j==2 and a=='R' : return 1
#------------------------------------------------
    elif s_i==2 and s_j==2 and a=='F' : return 1
    elif s_i==2 and s_j==1 and a=='L' : return 1
    elif s_i==2 and s_j==3 and a=='R' : return 1
#------------------------------------------------
    elif s_i==3 and s_j==3 and a=='F' : return 1
    elif s_i==3 and s_j==2 and a=='L' : return 1
    elif s_i==3 and s_j==0 and a=='R' : return 1
#------------------------------------------------
    elif s_i==4 and s_j==4 and a=='F' : return 1
    elif s_i==4 and s_j==7 and a=='L' : return 1
    elif s_i==4 and s_j==5 and a=='R' : return 1
#------------------------------------------------
    elif s_i==5 and s_j==9 and a=='F' : return 1
    elif s_i==5 and s_j==4 and a=='L' : return 1
    elif s_i==5 and s_j==6 and a=='R' : return 1
#------------------------------------------------
    elif s_i==6 and s_j==6 and a=='F' : return 1
    elif s_i==6 and s_j==5 and a=='L' : return 1
    elif s_i==6 and s_j==7 and a=='R' : return 1
#------------------------------------------------
    elif s_i==7 and s_j==3 and a=='F' : return 1
    elif s_i==7 and s_j==6 and a=='L' : return 1
    elif s_i==7 and s_j==4 and a=='R' : return 1
#------------------------------------------------
    elif s_i==8 and s_j==8 and a=='F' : return 1
    elif s_i==8 and s_j==11 and a=='L' : return 1
    elif s_i==8 and s_j==9 and a=='R' : return 1
#------------------------------------------------
    elif s_i==9 and s_j==13 and a=='F' : return 1
    elif s_i==9 and s_j==8 and a=='L' : return 1
    elif s_i==9 and s_j==10 and a=='R' : return 1
#------------------------------------------------
    elif s_i==10 and s_j==10 and a=='F' : return 1
    elif s_i==10 and s_j==9 and a=='L' : return 1
    elif s_i==10 and s_j==11 and a=='R' : return 1
#------------------------------------------------
    elif s_i==11 and s_j==7 and a=='F' : return 1
    elif s_i==11 and s_j==10 and a=='L' : return 1
    elif s_i==11 and s_j==8 and a=='R' : return 1
#------------------------------------------------
    elif s_i==12 and s_j==12 and a=='F' : return 1
    elif s_i==12 and s_j==15 and a=='L' : return 1
    elif s_i==12 and s_j==13 and a=='R' : return 1
#------------------------------------------------
    elif s_i==13 and s_j==13 and a=='F' : return 1
    elif s_i==13 and s_j==12 and a=='L' : return 1
    elif s_i==13 and s_j==14 and a=='R' : return 1
#------------------------------------------------
    elif s_i==14 and s_j==14 and a=='F' : return 1
    elif s_i==14 and s_j==13 and a=='L' : return 1
    elif s_i==14 and s_j==15 and a=='R' : return 1
#------------------------------------------------
    elif s_i==15 and s_j==11 and a=='F' : return 1
    elif s_i==15 and s_j==14 and a=='L' : return 1
    elif s_i==15 and s_j==12 and a=='R' : return 1
    else : return 0
    
def R(s_i, s_j) :
    if s_i == s_j : return -1
    elif s_j == 15 : return 1
    else : return 0
    
def zero(m,n):
    new_matrix = [[0 for row in range(n)] for col in range(m)]
    return new_matrix
    
def mult(matrix1,matrix2):
    if len(matrix1[0]) != len(matrix2):
        print 'Matrices must be m*n and n*p to multiply!'
    else:
        new_matrix = zero(len(matrix1),len(matrix2[0]))
        for i in range(len(matrix1)):
            for j in range(len(matrix2[0])):
                for k in range(len(matrix2)):
                    new_matrix[i][j] += matrix1[i][k]*matrix2[k][j]
        return new_matrix
        
def add(matrix1,matrix2):
    if len(matrix1) != len(matrix2) or len(matrix1[0]) != len(matrix2[0]):
        print 'Matrix must be the same length'
    else:
        new_matrix = zero(len(matrix1),len(matrix2[0]))
        for i in range(len(matrix1)):
            for j in range(len(matrix1[0])):
                new_matrix[i][j] = matrix1[i][j] + matrix2[i][j]
        return new_matrix
        
def skalMult(skalar,matrix):
    new_matrix = zero(len(matrix),len(matrix[0]))
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            new_matrix[i][j]=matrix[i][j]*skalar
    return new_matrix 
        
def maxSub(matrix1,matrix2):
    max = 0
    if len(matrix1) != len(matrix2) or len(matrix1[0]) != len(matrix2[0]):
        print 'Matrix must be the same length'
    else:
        for i in range(len(matrix1)):
            for j in range(len(matrix1[0])):
                if max < fabs(matrix1[i][j] - matrix2[i][j]) :
                    max = fabs(matrix1[i][j] - matrix2[i][j])
        return max
        
def print_dictList(dicti):
    for i in range(len(dicti)):     
            if dicti[i]['F']==1: print 'F',
            elif dicti[i]['L']==1: print 'L',
            elif dicti[i]['R']==1: print 'R', 
    print " "                      
    
value_iteration()
\end{verbatim}

\subsection*{b)}
Output of the policy evaluation for the given policy from Table 2; this is the value-function vector:
\begin{verbatim}
[[0.0],
[-3.1999996630006677],
[0.0],
[0.0],
[-4.9999996630006684],
[-3.999999663000668],
[-4.9999996630006684],
[-3.999999663000668],
[-4.9999996630006684],
[0.0],
[0.0],
[0.0],
[-3.999999663000668],
[-4.9999996630006684],
[-3.999999663000668],
[0.0]]
\end{verbatim}

\subsection*{c)}
Output of the policy evaluation for a different policy; this is the value-function vector:
\begin{verbatim}
[[0.0],
[0.0],
[-3.999999663000668],
[-4.9999996630006684],
[0.0],
[-3.999999663000668],
[-4.9999996630006684],
[0.0],
[-3.1999996630006677],
[-0.9983996630006671],
[-4.9999996630006684],
[-3.999999663000668],
[-1.5599996630006672],
[-1.247999663000667],
[-1.5599996630006672],
[-3.1999996630006677]]
\end{verbatim}
The results for the different initial policies differ because value iteration calls the policy evaluation only once, whereas policy iteration calls the policy evaluation repeatedly until the policy is stable, i.e. until it is no longer changed by the policy improvement.

\subsection*{d)}
The optimal policy according to the policy iteration is the following:\\
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{State}&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15\\
\hline
\textbf{Action}&R&F&L&L&R&F&L&L&R&F&L&L&L&L&R&L\\
\hline
\end{tabular}

\subsection*{e)}
The optimal policy according to the value iteration is the following:\\
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{State}&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15\\
\hline
\textbf{Action}&L&L&R&L&L&F&L&F&L&R&L&L&L&L&R&F\\
\hline
\end{tabular}

\end{document}


