
%%interesting interactions as the wiimote+grab button, and the kinect+data glove

	Humanoid robots are becoming more and more complex. Many advances in artificial intelligence try to remove the need for constant interaction with robots, but control through interaction remains crucial to their utility, and it can take multiple forms. It is important to have simple and intuitive ways of interacting with the complex robots available today; this is a form not only of controlling the robot but also of communicating with it. These interaction techniques can be used as a form of learning by demonstration or of helping the robot complete certain tasks.

	On the interaction front, gaming companies try to win over their customers through appealing and original interaction devices. These devices offer high technology at a low cost, and their releases generate high expectations, leading to the creation of a series of blogs and YouTube channels that act as gathering points for an unusual interaction community dedicated to using these devices in unexpected ways.

	This dissertation tries to join these two elements, the \ac{HRI} needs and the possibilities of novel devices, to study the difficulties and involvement users feel when interacting with a humanoid robot such as the iCub.

\section{Research summary}

	The research started with the practical objective of developing an interaction system for the iCub robot using a novel interaction device, the Wiimote bundled with the \ac{MP} extension.

	With this objective in mind, research into the typical forms of interaction between the \ac{Wiimote} device and humans was started by looking at the games available for the Wii gaming console. This led to the initial question of ``how can a gyroscope-based device be used in the interaction between humans and humanoid robots?''. From this point, further research was done in the \ac{HRI} field to survey the systems and works already available. It was understood that humanoid robots have many characteristics that are unique to their kind, so any interface developed for a humanoid must take them into account. The most similar and interesting works studied were in the field of rehabilitation, on how this kind of device can be used in the study of rehabilitation recovery. Specifically, \ac{Wiimote}-related papers mostly focused on the accelerometer, something that was adapted to our idea of control with the gyroscope.

	Because the main platform on which the work would be developed was the iCub robot, some research was also done on the iCub software and on previous works on interaction with this robot, although very little about this kind of control was found for the iCub, or for any other complex humanoid robot.

	This year has actually been a special one for gaming interaction devices: the first body-controlled remote control, the Kinect, was released by Microsoft and was quickly opened to the developer community by hackers and researchers. To take advantage of such a novel interaction device, the goals of the work were broadened, slightly altering the main focus question to ``\textbf{how do novel gaming interfaces compare as humanoid interfaces?}''. The research on the Kinect was made in an unorthodox way: because it was a completely new interaction device, almost no published work was available on similar systems, especially on humanoid robot interaction. This led to an interesting, partially web-supported research process, where the main discussions occurred through forums and technology blogs; this option also required constant re-confirmation of the knowledge acquired.

\section{Contributions}

	From the work developed, the contributions can be grouped into two categories: a practical contribution, translated into the software developed for the work done, and a theoretical contribution, represented in this dissertation by the results of the interaction tests made and their conclusions.
	
	The interaction software was developed with the user tests as the main focus, but also with reusability for other purposes in mind. There are several possible improvements, better understood at the end of this work, but even with those issues still unresolved the software has already been used on several different occasions. One of the most recurrent uses of the interaction systems has been as a presentation of the iCub capabilities, and the reception of these presentations has been quite positive. The presentation was also ported successfully to the Vizzy robot, both the Kinect interaction system and the \ac{Wiimote} interaction system. In addition to this demonstration use, students using the iCub for development have already asked a few times to use the interaction system as a tool for altering the robot posture, due to its simplicity. This use saves students time on tasks unrelated to their work, since the repositioning of the robot is typically done through the \ac{GUI} interface, which, as can be seen from the test results, is the slowest of the systems for performing tasks. The Kinect interface has also been used successfully with other projects\footnote{\url{http://youtu.be/xsUrt9ccGFs}}, enabling the grasping of a distant object with a data glove, and these first partnerships might lead to other, more interesting works. The software developed will be made available to the iCub community through the iCub and \ac{Yarp} repositories, to be used in other systems.
	
	The tests made, their results, and the research done helped in understanding users' feelings about these types of interaction. This understanding can be used in other works as a starting point for developing new systems; both straightforward interaction systems and systems where user intervention might be needed can benefit from it. For the \ac{Wiimote} and the Kinect, this was the start of building a good interaction system using inexpensive devices; particularly in the case of the Kinect, this work explored a brand new device and some of the users' reactions to it.

\section{Thesis statement}

	Through the work developed, it was possible to understand several important facts about the comparison of these systems.
	
	Known interfaces are better. Although the \ac{GUI}, the most common interface, was not more successful than the others, users felt better using it mostly because they knew from the start what it would do, and did not have to rely on a good understanding of the concept, since the concept was already known. The \ac{DPad} control was the most successful precisely because of that: users used it knowing what was going to happen, and what they expected happened, so they were comfortable enough to trust their own judgment of what the result of the interaction might be. This trust was shown to fail during the Kinect motor skeleton test, where users, although they understood the concept, were distrustful of what the reaction of the robot might be, taking more time in the first step to check whether the interaction corresponded to their expectations. This will be solved when the technology enters our daily lives; by then users will already have an idea of what the logical reaction of a robot to an interface might be, and will probably trust it more.
	
	Different interfaces serve different users. There were big differences among users in the test results; for some users it was possible to see that they adapted better to a certain interface than to others. For example, one of the users was able, during one of the tests, to get a clear idea of how a certain interface worked, and that interface became the fastest one in that user's tests. Yet in another test, one that might be considered easier, the same user had a slower result than a user who had not been able to understand the concept of the previous test. Users adapted better to whatever concept they understood better, and the understanding of a concept might depend on the type of user knowledge; for example, understanding the concept of the \ac{Wiimote} motor control involved understanding that the robot was composed of several motors and that the movement of each motor might change the position of other motors.
	
	Users expect clear feedback. Although the new interaction devices tend to be simpler, they do not give clear feedback; for that, some form of visual feedback is expected. In this case the feedback was the robot's reaction, which was not clear enough because those reactions were most of the time very complex; a simple graphical feedback was suggested by many as a good solution. What counts as ``clear'', however, also depends on the user.
	
	Comparing typical and novel devices directly, one can conclude that, although certain conceptual lines must be followed, it is possible to execute a much more complex interaction using the novel devices available; even so, the typical interaction system is still preferred by users.

\section{Future work}

	There are many details left open by this thesis that are interesting points of continuation. We point out three.
	
	The tests would benefit from being done not only to compare interaction devices but also people's reactions to them; this could be studied by running the tests with people of different backgrounds and, particularly, of different ages. During this work, older users showed a very different response from the younger users, and the question of how child users would interact with the humanoid through these devices remains open.
	
	The exploration of possible forms of feedback while controlling a humanoid robot is also an interesting study. The humanoid robot should warn users when something is wrong, or communicate its current state. This can be communicated to the user through a graphical interface, although having the robot itself communicate a problem while being controlled through interaction might be a good, useful solution.
	
	Blending interactive control through these novel devices with robot understanding. If an interaction device, instead of controlling the robot directly, could suggest actions to the robot, that could probably contribute to a better interaction result. An example would be grabbing a mug: if a user, by mimicking the grabbing of a certain object, could make the robot understand the intention of grabbing that object while continuously controlling it, the control of the robot could be shared, enabling a better chance of success.
	