\chapter{Background}
The history of multi-touch tablets stretches back to the mid-1980s, when the Input Research Group at the University of Toronto began developing the first multi-touch tablets \cite{LBS, Mehta}. Since then, due in no small part to the popularity of iOS and Android devices, multi-touch tablets have become ubiquitous in modern life and are used in all kinds of environments, from grocery stores to classrooms.

Multi-touch tablets give users the ability to have as many pointers as they have fingers, which in turn enables users to interact with applications using natural, intuitive gestures. However, this ability adds a layer of complexity for multi-touch interface developers, who must build applications that constantly process and interpret simultaneous, asynchronous touch events. Due to the inherent intricacies of developing multi-touch applications, there have been several attempts to provide APIs that make app development as easy as possible while still giving developers the power to create whatever interactions they can imagine.

In this chapter I will first discuss and categorize related work in the field of multi-touch interface development. Second, I will provide background on the Scratch programming language. Finally, I will present our specific goals for Tablet Scratch's multi-touch interaction set.
 
\section{Multi-Touch Interfaces}
There are several tablet programming environments currently available that allow developers to create projects with multi-touch interactivity. However, they all fall into one of two general categories, each of which directly conflicts with our goals. The first category consists of tablet programming environments which are not sufficiently accessible to novices, i.e., have a floor that is too high. The second category consists of tablet programming environments which do not provide enough functionality to allow experienced programmers to pursue more complicated ambitions, i.e., have a ceiling that is too low.

\subsection{Floor Too High}
The vast majority of widely used multi-touch interfaces are developed in tablet programming environments which offer great functionality, but are insufficiently accessible to beginners for our purposes. These ``professional" frameworks are the most extreme examples of ``high floor" tablet programming environments and include the iOS Developer Library \cite{iOS}, the Android SDK \cite{Android}, ActionScript 3.0 \cite{ActionScript}, and JavaScript/HTML5 \cite{JavaScript}. Note that for all of these frameworks, developers generally write their code on desktop computers even though their interactions are designed for tablets.

All of these ``professional" frameworks involve textual programming languages (TPLs), as opposed to graphical programming languages (GPLs). Although TPLs allow experienced programmers to efficiently create whatever they can imagine, they are generally unwelcoming to novices. Since developers must remember what tools are available or look them up in large, highly technical specification documents, TPLs make it difficult for beginners to explore. Furthermore, TPLs force beginners to suffer through waves of demoralizing, frustrating syntax errors before they can develop even the simplest of programs.

Thus, it may come as little surprise that these professional frameworks handle multi-touch inputs in a manner that allows developers to create user interfaces as complex as they can imagine, but that can also be esoteric and difficult for novices. Professional frameworks all handle multi-touch inputs in a way that mirrors how they handle mouse inputs, that is, using a programming approach commonly known as the observer pattern \cite{Gamma}. In these frameworks, every discrete touch action triggers an \emph{event}. An event is a datatype that carries with it some information, generally including an x-coordinate, a y-coordinate, a touch identification index, and an action type, along with several more complicated attributes. The three most basic action types are ``touch down" (for when a finger first makes contact with the screen), ``touch move" (for each time a finger already touching the screen moves to a new discrete location), and ``touch up" (for when a finger ends contact with the screen).

As touch events are generated by user interactions, they are sent to developer-defined \emph{event handlers} to be processed. Developers define an event handler by providing a callback function which takes in a touch event as an argument and processes it according to the information the events provide (see Figure \ref{ADTGestureSample}). While these frameworks provide developers with the ability to process touch inputs in any manner that they want, they are not very accessible to novices, even ignoring the fact that they are TPLs. To make use of the observer pattern, developers must first understand the concept of parameters and functions, and then slog through the oftentimes dense framework-specific documentation to understand what information touch events have, when events are triggered, and how events are triggered. To their credit, most of these professional frameworks also have predefined event handlers for common touch gestures, e.g., tap and swipe, for developers to use and customize, making certain interactions easier to develop.
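The observer-pattern flow these frameworks share can be sketched in a few lines of Python. This is only an illustration of the general pattern; the names \texttt{TouchEvent}, \texttt{dispatch}, and \texttt{on\_touch} are invented for this sketch and do not correspond to any particular framework's API.

```python
from dataclasses import dataclass

# Illustrative touch-event datatype; real frameworks attach many more
# attributes (pressure, timestamps, gesture state, etc.).
@dataclass
class TouchEvent:
    x: float
    y: float
    touch_id: int
    action: str  # "down", "move", or "up"

log = []

def on_touch(event: TouchEvent) -> None:
    # Developer-defined callback: inspects the event's attributes
    # and reacts accordingly (here it just records what happened).
    log.append((event.action, event.touch_id, event.x, event.y))

# Framework side: handlers are registered, and every discrete touch
# action triggers an event that is forwarded to each handler.
handlers = [on_touch]

def dispatch(event: TouchEvent) -> None:
    for handler in handlers:
        handler(event)

# One finger taps and drags: down, move, up.
dispatch(TouchEvent(10.0, 20.0, 0, "down"))
dispatch(TouchEvent(15.0, 25.0, 0, "move"))
dispatch(TouchEvent(15.0, 25.0, 0, "up"))
```

The essential point for novices is that the developer never calls \texttt{on\_touch} directly; the framework's dispatch machinery does, once per event.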

\begin{figure}
\centering
\includegraphics{images/ADTGestureSample.PNG}
\caption[Android Event Handling Code Sample]
{An Android event-handling code sample.}
\label{ADTGestureSample}
\end{figure}

As a response to the complexity and inaccessibility of the professional frameworks, several tablet programming environments have emerged which admirably attempt to make application development more accessible for beginner programmers without limiting what they can create as they gain expertise.

One of the most successful of these tablet programming environments is Codea \cite{Codea}. Codea uses an augmentation of the scripting language Lua, so it is still a TPL and suffers from the same accessibility hindrances. Unlike the professional frameworks, Codea comes with a programming environment that allows developers to code directly on their tablets. Compared to the professional frameworks, Codea significantly facilitates the process of developing multi-touch interactive applications. Although Codea keeps a general observer pattern structure for handling touches, it is greatly simplified. Touch events in Codea are relatively straightforward datatypes. Codea predefines a touch event handler, so all the developer has to do is designate their desired callback function as ``touched" and it will automatically be called for every touch event. Furthermore, Codea adds a global touch event named ``CurrentTouch," which provides touch values for an arbitrarily chosen primary touch. The values stored in CurrentTouch can easily be accessed anywhere in the code, making it easy to read needed values when necessary without having to deal with event handlers. While Codea definitely makes it easier for developers to make multi-touch interactive applications, it still requires that they learn the Lua syntax and the general linear flow of Codea applications.
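Codea's two simplifications (a predefined handler name and a globally readable primary touch) can be approximated in Python as follows. Codea itself uses Lua, and the names below (\texttt{touched}, \texttt{current\_touch}, \texttt{runtime\_dispatch}) only mirror, and are not, Codea's actual API; this is a sketch of the idea, not the framework.

```python
# Globally readable primary touch, analogous in spirit to Codea's
# CurrentTouch: any code can read it without registering a handler.
current_touch = {"x": 0, "y": 0, "state": "ended"}

seen = []

def touched(touch):
    # In Codea, simply naming a function "touched" is enough; the
    # runtime calls it for every touch event with no registration step.
    seen.append((touch["x"], touch["y"]))

def runtime_dispatch(touch):
    # Sketch of the runtime's job: update the global primary touch,
    # then invoke the predefined handler.
    current_touch.update(touch)
    touched(touch)

runtime_dispatch({"x": 120, "y": 80, "state": "began"})

# Elsewhere in the program, the primary touch is read directly,
# with no event handler in sight.
x_now = current_touch["x"]
```

The global read (\texttt{x\_now}) is what lowers the floor: a beginner can use touch coordinates without first understanding callbacks.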

Like the professional frameworks Codea tries to simplify, Codea unfortunately still suffers from a computational flow that is neither explicit nor natural. The confusion surrounding the observer pattern begins with the event handlers and the callbacks. To a novice, it is not clear what calls the callback and exactly when the callback is called. It is natural, but technically incorrect, to think that the callback is called by the device immediately after the triggering event happens. Since no line in the developer's written code calls the callback, it is also natural, but incorrect, to think that the callback function runs as a separate entity from the rest of the code, i.e., that it is non-blocking. Thus (to many novice developers' surprise), if one runs an infinite loop in one of the callbacks, no other code in the application will ever run after that callback is triggered. The reason for this confusion is that callback functions give the illusion of starting a new separate thread that can run simultaneously with the rest of the code, while the truth is that these callbacks are blocking functions. The problem with this illusion is not merely that it is an illusion, but that it leads novices into constructing a mental model that conflicts directly with the reality of the programming environment.
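The blocking behavior described above can be made concrete with a short Python sketch. A bounded \texttt{time.sleep} stands in for the developer's long (or infinite) loop; the event loop and callback names are invented for illustration.

```python
import time

events_processed = []

def long_callback(event):
    # Imagine this were "while True: pass" -- because the runtime calls
    # handlers synchronously on its single thread, nothing else runs
    # until this callback returns.
    time.sleep(0.05)  # stand-in for a long (or infinite) loop
    events_processed.append(event)

def event_loop(pending_events):
    # Single-threaded dispatch: each callback must finish before the
    # next event is even looked at.
    for event in pending_events:
        long_callback(event)

start = time.monotonic()
event_loop(["tap-1", "tap-2"])
elapsed = time.monotonic() - start
# "tap-2" could not be handled until "tap-1"'s callback returned,
# so the total time is at least two full callback durations.
```

Were the callback truly non-blocking, as many novices assume, the two taps would be handled concurrently and the total time would be roughly one callback duration.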

Another notable attempt to make tablet application development easier is MIT's AppInventor \cite{AppInventor}, which allows developers to make interactive Android applications using a simplified drag-and-drop, graphical programming language (GPL) rather than the professional Java-based Android SDK. By being a GPL, AppInventor spares users many of the pains of TPLs, including preventing most syntax errors. However, the friendly, welcoming appearance of this environment can give beginners a false sense of security when implementing multi-touch interactivity. AppInventor still utilizes the almost entirely single-threaded observer pattern, while furthering the illusion that callback functions are non-blocking, luring novices into the same previously mentioned pitfalls.

Observer pattern frameworks are inherently unintuitive and cause many headaches even for professional software engineers, let alone novices. To quote an Adobe Software Technology Lab presentation \cite{Parent}:

\begin{itemize}
\item $1/3$ of the code in Adobe's desktop applications is devoted to event handling logic;
\item $1/2$ of the bugs reported during a product cycle exist in this code.
\end{itemize}

As a result, there is a movement among some professional programmers to move away from the observer pattern in favor of what is known as reactive programming \cite{Maier, vanderPloeg}. One notable example of such a framework that handles touch inputs is the relatively new textual, functional reactive programming language Elm \cite{Elm}. In Elm, mouse events and touch events are replaced by \emph{signals}, such as ``Mouse.position" and ``Touch.touches," which always have a value that other parts of the code can persistently depend on. In theory, functional programming languages such as Elm are more intuitive due to their consistency and completeness. However, in practice, many people (professional programmers included) find it extraordinarily difficult to code without discrete states or mutable data.
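The signal idea can be sketched in Python with a toy class. This is loosely modeled on Elm-style signals for illustration only; Elm's real Signal API is quite different, and the \texttt{Signal} class and its methods below are invented.

```python
class Signal:
    """A toy time-varying value that downstream code can depend on."""

    def __init__(self, initial):
        self._value = initial
        self._dependents = []  # (derived signal, transform) pairs

    def map(self, fn):
        # Create a derived signal whose value persistently tracks
        # fn applied to this signal's value.
        derived = Signal(fn(self._value))
        self._dependents.append((derived, fn))
        return derived

    def set(self, value):
        # Pushing a new value re-evaluates everything downstream,
        # so dependents never need explicit event handlers.
        self._value = value
        for derived, fn in self._dependents:
            derived.set(fn(value))

    @property
    def value(self):
        return self._value

# A touch-position signal, in the spirit of Elm's "Touch.touches"
# or "Mouse.position", and a derived signal of just the x-coordinate.
touch_position = Signal((0, 0))
x_only = touch_position.map(lambda p: p[0])

touch_position.set((120, 45))
```

After the update, \texttt{x\_only} already holds the new x-coordinate; no callback was registered and none was visibly invoked, which is precisely the shift in mental model that reactive programming offers.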

\subsection{Ceiling Too Low}
At the other end of the spectrum are tablet programming environments that allow even the most inexperienced programmers to add at least some multi-touch interactivity to their projects. However, the tools these environments provide are so simplistic that they prevent experienced programmers from going beyond the most basic of applications. Many of these programming environments are aimed at children younger than our target demographic and, as a result, try to simplify application development as much as possible.

Two current examples of such tablet programming environments are Hopscotch \cite{Hopscotch} and ScratchJr \cite{ScratchJr}. Both environments are drag-and-drop graphical programming languages heavily inspired by Scratch, with the latter being developed in part by some of the core creators of Scratch. Furthermore, both environments allow users to develop directly on their tablets. For simplicity, multi-touch interactivity in these environments is limited to allowing objects to recognize when they or the background are tapped and running developer-designated code in response. In a sense, multi-touch interactivity in these environments is implemented in a simplified event listener/handler framework. However, these frameworks are distinguished from the floor-too-high frameworks because their callback functions run as separate threads, i.e., are non-blocking. Thus, callbacks with infinite loops are not a problem and run simultaneously with the rest of the code.
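The contrast with the blocking observer pattern can be sketched in Python using threads. This is an illustration of the non-blocking dispatch style, not the actual implementation of Hopscotch or ScratchJr; all names are invented.

```python
import threading
import time

results = []

def tapped_callback():
    # In ScratchJr-style environments each triggered script runs on
    # its own thread, so even a "forever" loop here does not stop
    # other scripts from running.
    while True:              # forever loop in the developer's script
        time.sleep(0.01)

def other_script():
    results.append("still running")

def dispatch_tap():
    # Non-blocking dispatch: spawn a thread for the callback and
    # return immediately instead of waiting for it to finish.
    threading.Thread(target=tapped_callback, daemon=True).start()

dispatch_tap()
other_script()   # runs despite the forever loop above
```

Because \texttt{dispatch\_tap} returns at once, the infinite loop in the callback never starves the rest of the program, matching the intuition that the blocking observer pattern violates.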

While these environments make it extraordinarily easy for novices to develop simple touch interactions, such as buttons, they make it essentially impossible to do anything that is much more complicated, like a touch-controlled Pong game or a drawing application. In the interest of simplicity and accessibility, these environments deny developers direct access to the exact locations of touches and preclude developers from writing scripts that can discover when touches begin and end. These limitations are fine for young children first exploring the world of digital creation, but can be constraining for older, more experienced, and more ambitious developers. Oversimplified frameworks constrain developers not only in terms of complexity (i.e., they lower the ceiling), but also in terms of project diversity (i.e., they narrow the walls). If the only touch gesture that is recognized in the framework is a tap, then all touch interactions are going to be taps.

\section{Scratch Programming Language}

In order to achieve low floors, wide walls, and high ceilings, the Scratch programming language was carefully designed to minimize complexity while preserving versatility and power. Three of the most important aspects of the Scratch programming language's design are its minimalist block syntax, its simple sprite object model, and its intuitive approach to multi-threading \cite{Maloney}.


\subsection{Block Syntax}

Scratch's simplicity begins with its elegant minimalistic block syntax. All Scratch scripts are created solely by connecting blocks like puzzle pieces. Each block belongs to one of only five different block types (\emph{command}, \emph{function}, \emph{control structure}, \emph{trigger}, and \emph{definition}) and is appropriately shaped to suggest how it can be used (as can be seen in Figure \ref{BlockSamples}) \cite{Maloney}.

\begin{figure}
\centering
\includegraphics{images/BlockSamples.PNG}
\caption[Samples of Scratch's Five Block Types]
{From left-to-right are samples of \emph{command}, \emph{function}, \emph{control structure}, \emph{trigger}, and \emph{definition} blocks. }
\label{BlockSamples}
\end{figure}

To begin, \emph{command} blocks can be thought of simply as action instructions that can vary from ``move (10) steps" to ``set video transparency to (50)\%." They generally have a notch on top and a corresponding bump on the bottom so they can be stacked on top of each other to form a set of instructions, appropriately called a \emph{stack}. Once a particular stack is initiated, the command blocks in the stack are actuated in order from top to bottom (as users would naturally assume).

Meanwhile, we can think of \emph{function} blocks as being values. These values can be straightforward, like ``mouse down?", or more complex, like the sum of two other function blocks. Unlike other blocks, function blocks have no notches or bumps and thus cannot be stacked on top of each other. Function blocks are either rounded or hexagonal to signify their type. Rounded function blocks can be either strings or numbers, depending on context, while hexagonal blocks are Booleans.

Many blocks, of all types, have customizable inputs or \emph{parameter slots}. Parameter slots come in different shapes specifying the types they accept. For example, rounded parameter slots with white backgrounds can be filled in either by the user typing in the values or by inserting a rounded function block. On the other hand, colored hexagonal parameter slots can only take in hexagonal function blocks. Since many function blocks can take in parameters that they themselves can fill, it is possible to recursively nest them as many times as desired. These shape specifications make it clear to the user how the blocks can be arranged and prevent users from connecting blocks in a way that is meaningless.

Next, a \emph{control structure} block can be thought of as a command block that takes in stacks of other command blocks and executes them according to some logic. For example, there are ``if $\left\langle \emph{boolean function block} \right\rangle$ then" command blocks that, when called, only execute the nested command stack if the given $\left\langle \emph{boolean function block} \right\rangle$ is true. Control structure blocks are ``C" and ``E" shaped with appropriate notches and bumps that make it clear how to nest stacks within them.

\emph{Trigger} blocks can be thought of as links between events and the stacks of command blocks that are to be executed when the events happen. The shape of these blocks can be described as block hats since they only have a bump on the bottom and no notch on top. This shape lets users know that stacks can be connected under the trigger blocks, and that the stacks will be executed once their connected trigger block's associated event occurs.

Lastly, \emph{definition} blocks can be thought of as a special kind of trigger block whose triggering event is a user-defined command block. Since these user-defined command blocks can have parameter slots, definition blocks come with input function blocks which can only be used within the definition block's stack. These input function blocks take whatever value was passed into the user-defined command block that triggered the execution.

\subsection{Sprite Object Model}
In Scratch, each sprite maintains its own set of variables and its own set of scripts. Although one sprite is able to access the variable values of another sprite, no sprite can directly alter another sprite's variables. Similarly, although any sprite can broadcast messages for other sprites to act upon, no sprite can directly execute another sprite's scripts. This level of encapsulation makes any particular sprite's behavior significantly easier to understand, since all of the necessary information is contained within its scripts \cite{Maloney}.

While the inability of sprites to ``share code" with one another can lead to otherwise unnecessary code copying, it does not preclude any functionality. As a result, the sprite object model successfully lowers the floor without strictly lowering the ceiling (even though it might make certain complex tasks a little more difficult).

\subsection{Multi-threading}
Multi-threading, i.e., concurrent command processing, is often a more natural way for people to think about real-world behaviors than one-at-a-time command processing. For example, consider telling someone how to walk down the street while chewing gum at the same time. Most people would first explain how to walk and how to chew gum individually and then tell the person to do both at the same time, that is, to process two commands simultaneously. On the other hand, if people could only process one command at a time, it would be necessary to merge the two activities together in a much more complicated way, like ``take right step, bite down, take left step, open mouth."

Despite all this, multi-threading in most programming languages is generally considered an advanced, perplexing feature, reserved for only the most experienced programmers. However, multi-threading is an essential aspect of Scratch, and even the newest of programmers quickly pick it up \cite{Maloney}. In Scratch, each executed stack in a project's scripts automatically runs concurrently. As a result, a Scratcher could code the behavior in the previously mentioned example by creating two stacks, one handling walking and another handling chewing gum, and then having both execute on the same event.

While race conditions have been the bane of many programmers who write multi-threaded applications, they are generally automatically avoided in Scratch. In Scratch, thread switches only take place at wait commands and at the end of loops \cite{Maloney}, rather than between any two commands (as is the case for most other programming languages). As a result, all individual command stacks are executed in their entirety, thus following most Scratchers' intuitions. Scratch is still prone to certain race conditions, particularly as Scratchers make more and more complex projects where the timing of code execution becomes crucial. Luckily, Scratch is set up in a way that lets advanced Scratchers easily create their own synchronization locks in order to gain near-complete control of thread switching.
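Scratch's switch-only-at-loop-boundaries rule can be sketched with Python generators, which yield control at explicit points just as Scratch threads do. This is an illustrative model of the scheduling policy, not Scratch's implementation; the \texttt{stack} and \texttt{scheduler} names are invented.

```python
trace = []

def stack(name, commands, repeats):
    # One Scratch-style stack: a loop whose body is a run of commands.
    for _ in range(repeats):
        for cmd in commands:
            trace.append(f"{name}:{cmd}")  # commands within one pass
        yield  # a thread switch can happen ONLY here, at the loop's end

def _step(thread):
    try:
        next(thread)
        return True           # thread ran until its next yield
    except StopIteration:
        return False          # thread finished

def scheduler(threads):
    # Round-robin over all live stacks, switching only at yields.
    while threads:
        threads = [t for t in threads if _step(t)]

a = stack("A", ["move", "turn"], 2)
b = stack("B", ["say"], 2)
scheduler([a, b])
```

Because the only yield point is at the end of each loop pass, stack A's ``move" and ``turn" are never interleaved with stack B's commands within a pass; switching between any two commands, as most languages allow, is exactly what would reintroduce the race conditions discussed above.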

Scratch's handling of concurrency is truly emblematic of the low-floors, high-ceiling philosophy. Scratchers who are new to programming generally do not know enough to even worry about race conditions; hence, Scratch's handling of concurrency automatically prevents race conditions from troubling them in nearly all simple cases. However, as novice Scratchers become more experienced and ambitious, they are more likely to run into race conditions. Fortunately, by the time they do, they will be experienced enough to figure out how to address them.

\section{Goals for the Multi-Touch Interaction Set in Tablet Scratch}
As the reader might imagine, our main goal for multi-touch interactivity is to simultaneously have as low floors, as wide walls, and as high ceilings as possible. Unfortunately, low floors and high ceilings are often achieved at the expense of one another, making it necessary to further specify priorities.

Following the values behind the design of the original Scratch, we place a higher priority on having low floors than on having high ceilings. Any design that does not allow beginners to painlessly create the simplest touch interactions will be considered an abject failure, no matter how high the ceiling is. That being said, any design will be considered undesirable if it simplifies multi-touch interactivity to the point that many complex interactions are not merely difficult but actually impossible to implement. On a similar note, an important aspect of the Scratch philosophy is not only to embrace the diversity of projects, but to actively encourage it. Thus, the blocks we develop for multi-touch interaction must lend themselves to being used in a wide variety of ways.

In addition to following the high-level ideals of Scratch, it is important that multi-touch interactivity be designed in accordance with some, if not all, of Scratch's lower-level principles as well. Perhaps the most important of these principles is that there are no error messages. A Scratcher's code might not always run exactly the way they want it to, but it will always run. Furthermore, the disparity between how the code runs and the Scratcher's desired outcome helps teach the Scratcher what must be changed. Hence, the multi-touch interaction set must be designed in a way that makes it impossible to have a syntax error. Another key Scratch principle is to make the interaction set as minimal as possible. In order to preserve explorability and limit confusion amongst beginners, the multi-touch interaction set must be designed to limit redundancy wherever possible. Finally, Scratch's design philosophy prioritizes simplicity over 100\% correctness. If an interaction set is simple and intuitive, but only works in 95\% of use cases, it is still preferable to an overly complicated interaction set that works in all use cases.

As mentioned earlier, many of the principles being considered for the design of the multi-touch interaction set are directly at odds with one another, meaning that even the best designs will exhibit certain trade-offs. Despite these necessary concessions, there are multiple strong potential designs for the multi-touch interaction set, each emphasizing different principles over others.
