\section{Duplicate-checking}
  \subsection{Breadth First Search}
Using duplicate-checking in Breadth First Search cuts off some branches of the exploration tree, so the exploration becomes faster. Since BFS explores the tree level by level, the algorithm will find a solution with or without these cuts, thanks to the completeness of the algorithm. Moreover, using duplicate-checking does not change the complexity of the algorithm.
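The idea can be sketched as follows. This is a minimal BFS with an explored set, not the provided implementation; \texttt{is\_goal} and \texttt{successors} are hypothetical helpers standing in for the vacuum domain.

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search with duplicate-checking.

    `successors(state)` yields (action, next_state) pairs; states must be
    hashable. Returns the list of actions leading to a goal, or None.
    """
    frontier = deque([(start, [])])
    explored = {start}  # duplicate-checking: never enqueue a state twice
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None
```

Because states already seen are skipped, each state is expanded at most once, which is exactly the branch-cutting described above.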

After a test on a 1x2 environment with one dirty cell (the cell where the agent does not start), we can see that the algorithm expanded 170 nodes for a path cost of 7. If the algorithm expanded all nodes without duplicate-checking, it would have expanded at least $(\sum_{n=0}^{6} 4^n) + 1 = 5462$ nodes. So the provided algorithm implements duplicate-checking.
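The lower bound can be checked directly (branching factor 4, goal at depth 7, so every node up to depth 6 plus one goal node must be expanded):

```python
# Lower bound on expansions for BFS without duplicate-checking:
# all nodes at depths 0..6, plus one goal node at depth 7.
lower_bound = sum(4**n for n in range(7)) + 1
print(lower_bound)  # 5462, far more than the 170 nodes observed
```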

  \subsection{Depth First Search}
Using duplicate-checking in Depth First Search really improves this algorithm. For example, in the vacuum domain, if the actions are always chosen in the same order (starting with turn left), the algorithm will just go deeper and deeper in the tree (always choosing to turn left), exploring only a tiny part of the possibilities and never trying to suck, so it will never reach a goal (except if initially there is no dirt). With duplicate-checking, the algorithm stops after four turn-left actions, then explores another part of the tree, and finally finds a solution.
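This behaviour can be illustrated with a minimal sketch, again with hypothetical \texttt{is\_goal} and \texttt{successors} helpers. A four-state cycle mimics the four orientations reached by repeated turn-left actions: without the explored set the recursion would loop forever, with it the cycle is abandoned after four steps.

```python
def dfs(start, is_goal, successors):
    """Depth-first search with duplicate-checking (graph search).

    Without the `explored` set, a cyclic action such as repeatedly
    turning left would make the search recurse forever.
    """
    explored = set()

    def rec(state, path):
        if is_goal(state):
            return path
        explored.add(state)  # duplicate-checking: cut cycles
        for action, nxt in successors(state):
            if nxt not in explored:
                result = rec(nxt, path + [action])
                if result is not None:
                    return result
        return None

    return rec(start, [])

# Four orientations form a cycle under 'L' (turn left); only after the
# cycle is exhausted does DFS try the 'S' (suck) action and reach the goal.
graph = {
    0: [('L', 1), ('S', 'goal')],
    1: [('L', 2)],
    2: [('L', 3)],
    3: [('L', 0)],
    'goal': [],
}
```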

If the actions are not always chosen in the same order, the algorithm might find a goal, but it will still explore an enormous tree.

After some tests, we can see that the provided DFS algorithm always runs into an infinite computation, so it does not implement duplicate-checking.

