#Ilia Sucholutsky
#August 18, 2013
#This AI uses probabilistic approximation with a normal distribution to determine the best number of hunts per round.
#The goal of this AI is to accumulate as much food as possible; to that end, it tries to help reach the cooperation threshold (m) when it is realistic to do so without the hunting costs becoming greater than the reward of the bonus.
#Below, 'm' refers to the number of hunts needed for the cooperation bonus and is essentially the public-goods value of this competition.
#It essentially works in 6 steps:
#Step 1 - Calculate the expected number of hunts (not counting the AI itself) for the next round.
#Step 2 - Calculate the overall standard deviation of the expected number of hunts using the approximation (npq)**0.5, where n is the number of trials, p is the probability of a trial occurring, and q is the probability of it not occurring.
#Step 3 - For every possible number of hunts the AI can make (from 0 to p-1, inclusive), calculate the Z-score of m minus that number, with the previously found expected_hunts acting as the mean.
#Step 4 - Convert every Z-score into a probability value, giving the chance of at least that many hunts occurring.
#Step 5 - Build expected_results for the round by multiplying each p_value by the reward it gives and subtracting the hunting/slacking costs.
#Step 6 - Pick the expected_result that gives the most food and return the number of hunts it entails, followed by the remaining number of slacks.
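#The steps above can be traced with hypothetical numbers (all values below are illustrative, not from the source):
#suppose p = 4 players remain (so 3 opponents), each opponent has reputation 0.5, and m = 5.
#Step 1: expected_hunts = 3 * (0.5*3) = 4.5
#Step 2: variance = 3 * (3*0.5*0.5) = 2.25, so the standard deviation is 2.25**0.5 = 1.5
#Step 3: for num = 1 hunt by the AI, z = ((5-1) - 4.5)/1.5, which is about -0.33
#Step 4: scipy.stats.norm.sf(-0.33) is about 0.63, the estimated chance that the bonus threshold is still reached
#Step 5: expected_result = 0.63*2*3 - 1*3 - 2*2, which is about -3.2
#Step 6: the same value is computed for num = 0, 2, and 3, and the num with the highest value is chosen.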
#It is important to note that this algorithm is more survivalistic than aggressive: it tries to get the maximum amount of food for itself and does not specifically target others in any way.
#In essence, the AI acts as a sort of backup for the community: if the community is going to be close to 'm' but slightly below it, the AI comes in and helps them reach it.
#Other than that, it tends mainly to slack, which is not that bad in and of itself if we look at a payoff matrix:
# 1/2|  H  |  S  |
#  h | 0,0 |-3,1 |
#  s | 1,-3|-2,-2|
#In this payoff matrix, Player 1 does significantly better if he slacks (1>0 and -2>-3); therefore, at least theoretically, slacking is better on a partner-to-partner basis.
#(These payoffs are consistent with a hunt costing 6 food, a slack costing 2, and each hunter in a pair producing 6 food that is split evenly between the two.)
#However, this changes when the bonus is introduced: hunting now becomes better on some occasions if we look at the big picture (the entire round rather than individual partners).
#That is what this AI does: it looks at the round as a whole and sees how many hunts will bring the highest expected return based on probability.
#Occasional hunting, especially closer to the end of the game when fewer players remain, gives the AI a small reputation boost so its reputation remains above 0.
#This gives it a slight advantage over constant slackers, because other players are more likely to hunt with it.
    def hunt_choices(
            self,
            round_number,
            current_food,
            current_reputation,
            m,
            player_reputations,
            ):
        import scipy
        from scipy import stats
        #scipy.stats is used later to convert z-scores into probabilities
        p = len(player_reputations) + 1
        #p is the number of remaining players, including the AI itself
        p1 = p - 1
        #p1 is the number of players not counting the AI
        choices = ['s']*p1
        #choices starts at its default value (all slacks) to be returned
        expected_hunts = 0
        variance = 0
        expected_results=[]
        z_scores = []
        #Initialize all the variables for later use
        if round_number > 1:
            #in round one all reputations are 0 or non-existent, so the AI only starts using probability in round 2
            for rep in player_reputations:
                expected_hunts += rep*(p-1)
                #the expected number of hunts, not counting the AI, is accumulated
                variance += (p-1)*rep*(1-rep)
                #the overall variance (the standard deviation squared) is accumulated
            std_dev = variance**0.5
            #std_dev is the standard deviation
            if std_dev == 0:
                std_dev = 1
                #if the standard deviation is 0, fall back to 1 to prevent division by zero
            for num in range(p):
                z_scores += [((m-num) - expected_hunts)/std_dev]
                #z-scores are calculated for every possible number of hunts the AI can make

            p_values = scipy.stats.norm.sf(z_scores)
            #z-scores are converted to probability values
            for num in range(p):
                expected_results += [p_values[num]*2*p1 - (num*3) - ((p1-num)*2)]
                #the expected round result for every possible number of hunts by the AI is calculated: expected bonus minus hunting and slacking costs
            best = expected_results.index(max(expected_results))
            choices = ['h']*best + ['s']*(p1-best)
            #the number of hunts with the best expected result is found and the corresponding hunts and slacks are built
        return choices
        

    def hunt_outcomes(self, food_earnings):
        '''Required function defined in the rules'''
        pass
        

    def round_end(self, award, m, number_hunters):
        '''Required function defined in the rules'''
        pass
#These two methods need no logic of their own, since all the decision-making is done in hunt_choices.
