Associative reinforcement learning
Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.
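A minimal sketch of this closed loop, assuming a toy contextual task with two input patterns and two actions (the names, reward table, and hyperparameters are illustrative): the learner must associate each input with the action that pays off best under stochastic rewards.

```python
import random

# Associative RL sketch: an epsilon-greedy learner associates each context
# (input pattern) with its best action under stochastic rewards.
CONTEXTS = [0, 1]
ACTIONS = [0, 1]
# True expected reward for each (context, action) pair - unknown to the learner.
TRUE_REWARD = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}

def train(episodes=5000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {(c, a): 0.0 for c in CONTEXTS for a in ACTIONS}  # value estimates
    for _ in range(episodes):
        c = rng.choice(CONTEXTS)                  # environment presents a context
        if rng.random() < epsilon:                # explore
            a = rng.choice(ACTIONS)
        else:                                     # exploit current estimate
            a = max(ACTIONS, key=lambda x: q[(c, x)])
        r = 1.0 if rng.random() < TRUE_REWARD[(c, a)] else 0.0  # stochastic reward
        q[(c, a)] += alpha * (r - q[(c, a)])      # incremental update
    return q

q = train()
# After training, the learner associates context 0 with action 0
# and context 1 with action 1.
```

Unlike plain supervised classification, the learner never sees the correct action, only the reward of the action it tried; unlike a context-free learning automaton, its choice must depend on the input pattern.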

Deep reinforcement learning
This approach extends reinforcement learning by using a deep neural network to represent the value function or policy, without explicitly designing the state space. The work on learning ATARI games by Google DeepMind drew increased attention to deep reinforcement learning, also known as end-to-end reinforcement learning.
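The core idea can be sketched as follows, assuming a tiny two-layer network and a single toy transition (all sizes, weights, and the transition are illustrative, and the fixed TD target stands in for a DQN-style target network): the network maps a raw observation vector directly to Q-values, so no state features are designed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, N_ACTIONS = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (HIDDEN, OBS_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.5, (N_ACTIONS, HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(obs):
    """Forward pass: raw observation in, one Q-value per action out."""
    h = np.maximum(0.0, W1 @ obs + b1)      # ReLU hidden layer
    return W2 @ h + b2, h

def td_step(obs, action, td_target, lr=0.01):
    """One gradient step shrinking the squared TD error on one transition."""
    global W1, b1, W2, b2
    q, h = q_values(obs)
    err = q[action] - td_target             # TD error on the taken action
    dh = err * W2[action] * (h > 0)         # gradient reaching the hidden layer
    W2[action] -= lr * err * h              # backprop through the output layer
    b2[action] -= lr * err
    W1 -= lr * np.outer(dh, obs)
    b1 -= lr * dh
    return err ** 2                         # squared TD error before the step

obs = rng.normal(size=OBS_DIM)
next_obs = rng.normal(size=OBS_DIM)
q_next, _ = q_values(next_obs)
td_target = 1.0 + 0.99 * q_next.max()      # r + gamma * max_a' Q(s', a')
losses = [td_step(obs, 0, td_target) for _ in range(50)]
# Repeated steps on the fixed target drive the squared TD error down.
```

Holding the target fixed during the inner updates mirrors the target-network trick used to stabilize deep Q-learning.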

Adversarial deep reinforcement learning
Adversarial deep reinforcement learning is an active area of research focusing on the vulnerabilities of learned policies. Early studies in this area showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, recent studies have shown that these proposed solutions are far from providing an accurate representation of the current vulnerabilities of deep reinforcement learning policies.
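The kind of manipulation involved can be sketched with a sign-based (FGSM-style) perturbation of the observation. A linear two-action policy stands in for a trained network for clarity, and the weights are illustrative assumptions; the same idea applies to gradients taken through a deep policy.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 8))              # score weights for a 2-action policy

def act(obs):
    """Greedy action under the linear policy."""
    return int(np.argmax(W @ obs))

obs = rng.normal(size=8)
a = act(obs)
grad = W[a] - W[1 - a]                   # gradient of the action margin w.r.t. obs
margin = grad @ obs                      # positive while action a is preferred
eps = margin / np.abs(grad).sum() + 1e-3 # just enough to cross the boundary
adv_obs = obs - eps * np.sign(grad)      # uniform-size sign nudge per entry
# The perturbed observation flips the policy's greedy action.
```

Each observation entry moves by the same small amount, yet the policy's decision changes; this is the sense in which learned policies can be fragile to structured, low-magnitude input changes.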

Fuzzy reinforcement learning
By introducing fuzzy inference into RL, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF-THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with Fuzzy Rule Interpolation allows the use of reduced-size sparse fuzzy rule bases to emphasize cardinal rules (the most important state-action values).
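A minimal sketch of such a rule base, assuming a one-dimensional continuous state in [0, 1], two actions, and three triangular membership functions (the rules and consequent values are illustrative): each IF-THEN rule contributes its Q-values in proportion to how strongly the state matches its antecedent.

```python
import numpy as np

CENTERS = np.array([0.0, 0.5, 1.0])   # rule antecedents: "low", "medium", "high"
WIDTH = 0.5
# Rule consequents: one Q-value per (rule, action).
CONSEQUENTS = np.array([[1.0, 0.0],   # IF state is low    THEN Q = (1.0, 0.0)
                        [0.5, 0.5],   # IF state is medium THEN Q = (0.5, 0.5)
                        [0.0, 1.0]])  # IF state is high   THEN Q = (0.0, 1.0)

def membership(s):
    """Triangular membership degree of state s in each rule."""
    return np.maximum(0.0, 1.0 - np.abs(s - CENTERS) / WIDTH)

def q(s):
    """Q-values as the membership-weighted average of rule consequents."""
    w = membership(s)
    return w @ CONSEQUENTS / w.sum()
```

Between rule centers the output interpolates smoothly: `q(0.25)` blends the "low" and "medium" rules equally, which is what makes a handful of rules sufficient to cover a continuous state space.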

Inverse reinforcement learning
In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.
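One simple way to sketch this inference, assuming a linear reward r(s) = w · φ(s) and precomputed average feature vectors for the expert and a baseline policy (both vectors here are illustrative assumptions): repeatedly push the reward weights in the feature-matching direction, as in apprenticeship learning, so the expert's behavior scores higher than the baseline's.

```python
import numpy as np

mu_expert = np.array([0.8, 0.1, 0.1])    # expert's average state features
mu_baseline = np.array([0.3, 0.4, 0.3])  # baseline policy's average features

w = np.zeros(3)                          # linear reward weights to infer
for _ in range(10):
    w += 0.1 * (mu_expert - mu_baseline) # push reward toward expert behavior
    w /= max(np.linalg.norm(w), 1e-8)    # keep reward weights bounded

# Under the inferred reward, the expert's average features score higher
# than the baseline's, i.e. w @ mu_expert > w @ mu_baseline.
```

A full IRL method would re-solve the RL problem under each candidate reward and recompute the learner's features; this sketch shows only the direction of the weight update.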

Safe reinforcement learning
Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes.
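One common mechanism for respecting constraints during learning can be sketched as a safety layer that masks constraint-violating actions before the greedy choice. The Q-values, cost estimates, and limit below are illustrative assumptions.

```python
import numpy as np

q_values = np.array([1.0, 2.0, 0.5])       # estimated return per action
expected_cost = np.array([0.1, 0.9, 0.2])  # estimated safety cost per action
COST_LIMIT = 0.5                           # constraint: cost must stay below this

def safe_greedy(q, cost, limit):
    """Greedy action restricted to those satisfying the safety constraint."""
    safe = cost <= limit                   # actions that respect the constraint
    if not safe.any():                     # fall back to the least costly action
        return int(np.argmin(cost))
    masked = np.where(safe, q, -np.inf)    # never pick a known-unsafe action
    return int(np.argmax(masked))

a = safe_greedy(q_values, expected_cost, COST_LIMIT)
# Action 1 has the highest estimated return but violates the cost limit,
# so the safety layer selects action 0 instead.
```

This trades some return for guaranteed constraint satisfaction at every step, which is exactly the "reasonable performance and/or safety constraints" balance the definition above describes.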
Please describe the broad classes of Reinforcement Learning (RL) within Machine Learning and what problems they attempt to address.
Reinforcement Learning is an area of Machine Learning in which an agent (the Reinforcement Learning algorithm) operates within a problem space or environment, using the results of past trials and the potential paths through the environment to compute a policy that maximizes its cumulative reward.

Given the variety of environments in which RL can operate, the agent can take a range of approaches, broadly categorized into six:
1. Associative RL - which combines traditional Machine Learning pattern classification with stochastic learning automata, so the agent learns to associate inputs with the actions that yield the best stochastic rewards.
2. Deep RL - which leverages deep neural networks to represent the policy or value function, removing the need to explicitly design the state space.
3. Adversarial Deep RL - which studies the vulnerabilities of policies learned by deep neural networks, using small adversarial manipulations of inputs to find weaknesses in these models.
4. Fuzzy RL - which leverages a near-natural-language definition of IF-THEN fuzzy rules to approximate the state-action value function in continuous spaces.
5. Inverse RL - which removes the given reward function from the model and instead infers the reward from observed expert behavior, assumed to be optimal or close to optimal.
6. Safe RL - an approach to learning policies that also takes into account the behavior of the agent in the environment, ensuring that the agent's steps maintain reasonable performance and respect safety constraints.