\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
Applications that use complex gesture-based interaction need to translate primitive messages from low-level device drivers to complex, high-level gestures, and map these gestures to elements in an application. This report presents a generic architecture for the detection of complex gestures in an application. The architecture translates device driver messages to a common set of ``events''. The events are then delegated to a tree of ``event areas'', which are used to separate groups of events and assign these groups to an element in the application. Gesture detection is performed on a group of events assigned to an event area, using detection units called ``gesture trackers''. An implementation of the architecture as a daemon process would be capable of serving gestures to multiple applications at the same time. A reference implementation and two test case applications have been created to test the effectiveness of the architecture design.
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents

\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch trackpads, to multi-touch devices like smartphones and tablets. Multi-touch devices enable a user to interact with software using hand gestures, making the interaction more expressive and intuitive. These gestures are more complex than the primitive ``click'' or ``tap'' events used by single-touch devices. Some examples of more complex gestures are ``pinch''\footnote{A ``pinch'' gesture is formed by performing a pinching movement with multiple fingers on a multi-touch surface. Pinch gestures are often used to zoom in or out on an object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an object and throwing it in a direction on a touch surface, giving it momentum to move for some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some multi-touch devices are already capable of recognizing objects touching the screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens will possibly be extended or even replaced with in-air interaction (Microsoft's Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up} events. Application programmers who want to incorporate complex, intuitive gestures in their application face the challenge of interpreting these primitive events as gestures. With the increasing complexity of gestures, the complexity of the logic required to detect these gestures increases as well. This challenge limits, or even deters, application developers from using complex gestures in an application.
The main question in this research project is whether a generic architecture for the detection of complex interaction gestures can be designed, with the capability of managing the complexity of gesture detection logic. The ultimate goal would be to create an implementation of this architecture that can be extended to support a wide range of complex gestures. With such an implementation, application developers would no longer need to reinvent gesture detection for every new gesture-based application.

Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt}, already include the detection of commonly used gestures like \emph{pinch} gestures. However, this detection logic is dependent on the application framework. Consequently, an application developer who wants to use multi-touch interaction in an application is forced to use an application framework that includes support for multi-touch gestures. Moreover, the set of supported gestures is limited by the application framework of choice. To incorporate a custom gesture in an application, the application developer needs to extend the framework. This requires extensive knowledge of the framework's architecture. Also, if the same gesture is needed in another application that is based on another framework, the detection logic has to be translated for use in that framework.

Nevertheless, application frameworks are a necessity when it comes to fast, cross-platform development. A generic architecture design should aim to be compatible with existing frameworks, and provide a way to detect and extend gestures independent of the framework.

Application frameworks are written in a specific programming language. To support multiple frameworks and programming languages, the architecture should be accessible to applications using a language-independent method of communication. This intention leads towards the concept of a dedicated gesture detection application that serves gestures to multiple applications at the same time.

The scope of this thesis is limited to the detection of gestures on multi-touch surface devices. It presents a design for a generic gesture detection architecture for use in multi-touch based applications. A reference implementation of this design is used in some test case applications, whose goal is to test the effectiveness of the design and detect its shortcomings.

\section{Structure of this document}
% TODO: write once the thesis is finished

\chapter{Related work}

\section{Gesture and Activity Recognition Toolkit}

The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a toolkit for the development of gesture-based applications. Its authors state that the best way to classify gestures is to use machine learning. The programmer trains a program to recognize gestures using the machine learning library from the toolkit. The toolkit contains a callback mechanism that the programmer uses to execute custom code when a gesture is recognized. Though multi-touch input is not directly supported by the toolkit, the level of abstraction does allow for it to be implemented in the form of a ``touch'' sensor.

The reason to use machine learning is the statement that gesture detection ``is likely to become increasingly complex and unmanageable'' when using a set of predefined rules to detect whether some sensor input can be seen as a specific gesture. This statement is not necessarily true.
If the programmer is given a way to separate the detection of different types of gestures, and flexibility in rule definitions, over-complexity can be avoided.

\section{Gesture recognition implementation for Windows 7}

The online article \cite{win7touch} presents a Windows 7 application, written in Microsoft's .NET. The application shows detected gestures in a canvas. Gesture trackers keep track of stylus locations to detect specific gestures. The event types required to track a touch stylus are ``stylus down'', ``stylus move'' and ``stylus up'' events. A \texttt{GestureTrackerManager} object dispatches these events to gesture trackers. The application supports a limited number of pre-defined gestures.

An important observation in this application is that different gestures are detected by different gesture trackers, thus separating gesture detection code into maintainable parts.

\section{Analysis of related work}

The simple Processing implementation of multi-touch events \cite{processingMT} provides most of the functionality that can be found in existing multi-touch applications. In fact, many applications for mobile phones and tablets only use tap and scroll events. For this category of applications, using machine learning seems excessive. Though the representation of a gesture as a feature vector in a machine learning algorithm is a generic and formal way to define a gesture, a programmer-friendly architecture should also support simple, ``hard-coded'' detection code. A way to separate different pieces of gesture detection code, thus keeping a code library manageable and extendable, is to use different gesture trackers.

\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

This chapter describes a design for a generic multi-touch gesture detection architecture. The architecture is represented as a diagram of relations between different components. Sections \ref{sec:driver-support} to \ref{sec:daemon} define requirements for the architecture, and extend the diagram with components that meet these requirements. Section \ref{sec:example} describes an example usage of the architecture in an application.

The input of the architecture comes from a multi-touch device driver. The task of the architecture is to translate this input to multi-touch gestures that are used by an application, as illustrated in figure \ref{fig:basicdiagram}. In the course of this chapter, the diagram is extended with the different components of the architecture.

\basicdiagram

\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol \cite{TUIO} is an example of a driver that can be used by multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate low-level touch events (see appendix \ref{app:tuio} for more details). These messages are specific to the API of the TUIO protocol. Other drivers may use different message types. To support more than one driver in the architecture, there must be some translation from driver-specific messages to a common format for primitive touch events. After all, the gesture detection logic in a ``generic'' architecture should not be implemented based on driver-specific messages. The event types in this format should be chosen so that multiple drivers can trigger the same events. If each supported driver were to add its own set of event types to the common format, the purpose of it being ``common'' would be defeated.
A minimal expectation for a touch device driver is that it detects simple touch points, with a ``point'' being an object at an $(x, y)$ position on the touch surface. This yields a basic set of events: $\{point\_down, point\_move, point\_up\}$. The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used by some touch devices to identify objects.}, which also have a rotational property. This results in a more extended set: $\{point\_down, point\_move, point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$. Due to their generic nature, the use of these events is not limited to the TUIO protocol. Another driver that can distinguish rotated objects from simple touch points could also trigger them.

The component that translates driver-specific messages to common events will be called the \emph{event driver}. The event driver runs in a loop, receiving and analyzing driver messages. When a sequence of messages is analyzed as an event, the event driver delegates the event to other components in the architecture for translation to gestures. This communication flow is illustrated in figure \ref{fig:driverdiagram}.

\driverdiagram

Support for a touch driver can be added by adding an event driver implementation. The choice of event driver implementation that is used in an application is dependent on the driver support of the touch device being used. Because driver implementations have a common output format in the form of events, multiple event drivers can run at the same time (see figure \ref{fig:multipledrivers}). This design feature allows low-level events from multiple devices to be aggregated into high-level gestures.

\multipledriversdiagram
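To illustrate the idea, the following sketch (in Python, like the reference implementation) shows what a common event format and an event driver for TUIO-like ALIVE/SET messages could look like. The class and method names are illustrative assumptions; they are not taken from the reference implementation.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Event:
    """Driver-independent representation of a primitive touch event."""
    type: str       # 'point_down', 'point_move' or 'point_up'
    x: float        # normalized screen coordinates in [0.0, 1.0]
    y: float
    object_id: int  # identifies the touch object across events

class EventDriver:
    """Translates driver-specific messages into common events."""
    def __init__(self):
        self.listeners = []

    def emit(self, event):
        for listener in self.listeners:
            listener(event)

class TuioEventDriver(EventDriver):
    """Hypothetical driver mapping TUIO ALIVE/SET messages to events."""
    def __init__(self):
        super().__init__()
        self.positions = {}  # last known position per session id

    def handle_alive(self, session_ids):
        # Session ids that disappeared have been lifted from the surface.
        for sid in set(self.positions) - set(session_ids):
            x, y = self.positions.pop(sid)
            self.emit(Event('point_up', x, y, sid))

    def handle_set(self, sid, x, y):
        kind = 'point_move' if sid in self.positions else 'point_down'
        self.positions[sid] = (x, y)
        self.emit(Event(kind, x, y, sid))
\end{verbatim}

An actual event driver would perform this translation in its receive loop and pass the resulting events on to the event areas introduced in the next section.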
\section{Restricting events to a screen area}
\label{sec:areas}
% TODO: in introduction: gestures are composed of multiple primitives

Touch input devices are unaware of the graphical input widgets\footnote{``Widget'' is a name commonly used to identify an element of a graphical user interface (GUI).} rendered by an application, and therefore generate events that simply identify the screen location at which an event takes place. User interfaces of applications that do not run in full screen mode are contained in a window. Events that occur outside the application window should in most cases not be handled by the program. What's more, widgets within the application window itself should be able to respond to different gestures. For example, a button widget may respond to a ``tap'' gesture to be activated, whereas the application window responds to a ``pinch'' gesture to be resized. In order to be able to direct a gesture to a particular widget in an application, a gesture must be restricted to the area of the screen covered by that widget.

An important question is whether the architecture should offer a solution to this problem, or leave the task of assigning gestures to application widgets to the application developer. If the architecture does not provide a solution, the ``Event analysis'' component in figure \ref{fig:multipledrivers} receives all events that occur on the screen surface. The gesture detection logic thus uses all events as input to detect a gesture. This leaves no possibility for a gesture to occur at multiple screen positions at the same time. The problem is illustrated in figure \ref{fig:ex1}, where two widgets on the screen can be rotated independently. The rotation detection component that detects rotation gestures receives all four fingers as input. If the two groups of finger events are not separated by cluster detection, only one rotation event will occur.

\examplefigureone

A gesture detection component could perform a heuristic form of cluster detection based on the distance between events. However, this method cannot guarantee that a cluster of events corresponds with a particular application widget. In short, a gesture detection component is difficult to implement without awareness of the location of application widgets. Secondly, the application developer still needs to direct gestures to a particular widget manually. This requires geometric calculations in the application logic, which is a tedious and error-prone task for the developer.

A better solution is to group events that occur inside the area covered by a widget, before passing them on to a gesture detection component. Different gesture detection components can then detect gestures simultaneously, based on different sets of input events.

An area of the screen surface will be represented by an \emph{event area}. An event area filters input events based on their location, and then delegates events to gesture detection components that are assigned to the event area. Events that are located outside the event area are not delegated to its gesture detection components. In the example of figure \ref{fig:ex1}, the two rotatable widgets can be represented by two event areas, each having a different rotation detection component.

\subsection*{Callback mechanism}

When a gesture is detected by a gesture detection component, it must be handled by the client application. A common way to handle events in an application is a ``callback'' mechanism: the application developer binds a function to an event, and that function is called when the event occurs. Because of the familiarity of this concept among developers, the architecture uses a callback mechanism to handle gestures in an application. Callback handlers are bound to event areas, since event areas control the grouping of events and thus the occurrence of gestures in an area of the screen. Figure \ref{fig:areadiagram} shows the position of event areas in the architecture.

\areadiagram

%Note that the boundaries of an area are only used to group events, not
%gestures. A gesture could occur outside the area that contains its
%originating events, as illustrated by the example in figure \ref{fig:ex2}.
%\examplefiguretwo
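As an illustration of the callback mechanism, the sketch below shows a minimal event area that filters events by location, delegates them to its gesture detection components, and calls the handlers that the application has bound to a gesture type. The API is hypothetical and only serves to illustrate the concept.

\begin{verbatim}
from collections import namedtuple

Gesture = namedtuple('Gesture', 'type angle')

class RectangularArea:
    """Event area that groups events inside a rectangle on the screen."""
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.handlers = {}  # gesture type -> list of callback functions
        self.trackers = []  # gesture detection components

    def contains(self, event):
        return (self.x <= event.x < self.x + self.width and
                self.y <= event.y < self.y + self.height)

    def bind(self, gesture_type, callback):
        self.handlers.setdefault(gesture_type, []).append(callback)

    def delegate(self, event):
        # Only events inside the area reach its gesture trackers.
        if self.contains(event):
            for tracker in self.trackers:
                tracker.handle_event(event)

    def trigger(self, gesture):
        # Called by a gesture tracker when it has detected a gesture.
        for callback in self.handlers.get(gesture.type, []):
            callback(gesture)

# The application binds a handler; a rotation tracker would call trigger().
area = RectangularArea(0, 0, 200, 200)
area.bind('rotate', lambda g: print('rotated by', g.angle, 'radians'))
area.trigger(Gesture('rotate', 0.5))  # prints: rotated by 0.5 radians
\end{verbatim}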
A remark must be made about the use of event areas to assign events to the detection of some gesture. The concept of an event area is based on the assumption that the set of originating events that form a particular gesture can be determined based exclusively on the location of the events. This is a reasonable assumption for simple touch objects whose only parameter is a position, such as a pen or a human finger. However, more complex touch objects can have additional parameters, such as rotational orientation or color. An even more generic concept is the \emph{event filter}, which determines whether an event should be assigned to a particular gesture detection component based on all available parameters. This level of abstraction provides additional methods of interaction. For example, a camera-based multi-touch surface could make a distinction between gestures performed with a blue gloved hand, and gestures performed with a green gloved hand. As mentioned in the introduction (chapter \ref{chapter:introduction}), the scope of this thesis is limited to multi-touch surface based devices, for which the \emph{event area} concept suffices. Section \ref{sec:eventfilter} explores the possibility of replacing event areas with event filters.

\subsection{Area tree}
\label{sec:tree}

The simplest usage of event areas in the architecture would be a flat list of event areas. When the event driver delegates an event, it is accepted by each event area that contains the event coordinates. If the architecture were to be used in combination with an application framework like GTK \cite{GTK}, each GTK widget that responds to gestures should have a mirroring event area that synchronizes its location with that of the widget. Consider a panel with five buttons that all listen to a ``tap'' event. If the location of the panel changes as a result of movement of the application window, the positions of all buttons have to be updated too.

This process is simplified by the arrangement of event areas in a tree structure. A root event area represents the panel, containing five other event areas which are positioned relative to the root area. The relative positions do not need to be updated when the panel area changes its position. GUI frameworks, like GTK, use this kind of tree structure to manage graphical widgets. If the GUI toolkit provides an API for requesting the position and size of a widget, a recommended first step when developing an application is to create a subclass of the event area that automatically synchronizes its position with that of a widget from the GUI framework.

\subsection{Event propagation}
\label{sec:eventpropagation}

Another problem occurs when event areas overlap, as shown in figure \ref{fig:eventpropagation}. When the white square is rotated, the gray square should keep its current orientation. This means that events that are used for rotation of the white square should not be used for rotation of the gray square. The use of event areas alone does not provide a solution here, since both the gray and the white event area accept an event that occurs within the white square.

The problem described above is a common problem in GUI applications, and there is a common solution (used by GTK \cite{gtkeventpropagation}, among others). An event is passed to an ``event handler''. If the handler returns \texttt{true}, the event is considered ``handled'' and is not ``propagated'' to other widgets. Applied to the example of the rotating squares, the rotation detection component of the white square should stop the propagation of events to the event area of the gray square. This is illustrated in figure \ref{fig:eventpropagation}.

In the example, rotation of the white square has priority over rotation of the gray square because the white area is the widget actually being touched at the screen surface. In general, events should be delegated to event areas according to the order in which the event areas are positioned over each other. The tree structure in which event areas are arranged is an ideal tool to determine the order in which an event is delegated. Event areas in deeper layers of the tree are positioned on top of their parent. An object touching the screen is essentially touching the deepest event area in the tree that contains the triggered event. That event area should be the first to delegate the event to its gesture detection components, and then propagate the event up in the tree to its ancestors. A gesture detection component can stop the propagation of the event.

An additional type of event propagation is ``immediate propagation'', which indicates propagation of an event from one gesture detection component to another. This is applicable when an event area uses more than one gesture detection component. One of the components can stop the immediate propagation of an event, so that the event is not passed to the next gesture detection component, nor to the ancestors of the event area. When regular propagation is stopped, the event is still propagated to the event area's other gesture detection components first, before actually being stopped.

\eventpropagationfigure
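The delegation order described above can be implemented by delivering an event to the deepest event area that contains it, and then propagating the event to the area's ancestors until a gesture detection component stops the propagation. The sketch below illustrates this; it is a simplified, hypothetical implementation (immediate propagation is omitted) and does not reflect the reference implementation's API.

\begin{verbatim}
class EventArea:
    def __init__(self, contains):
        self.contains = contains  # function: event -> bool
        self.parent = None
        self.children = []
        self.trackers = []        # objects with handle_event(event) -> bool

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def find_deepest(self, event):
        """Deepest area in the tree that contains the event, or None."""
        if not self.contains(event):
            return None
        for child in self.children:
            deepest = child.find_deepest(event)
            if deepest is not None:
                return deepest
        return self

    def delegate(self, event):
        """Deliver the event to the deepest matching area, then propagate
        it to that area's ancestors until a tracker stops propagation."""
        area = self.find_deepest(event)
        while area is not None:
            for tracker in area.trackers:
                if tracker.handle_event(event):  # True stops propagation
                    return
            area = area.parent
\end{verbatim}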
\newpage
\section{Detecting gestures from events}
\label{sec:gesture-detection}

The low-level events that are grouped by an event area must be translated to high-level gestures in some way. Simple gestures, such as a tap or the dragging of an element using one finger, are easy to detect by comparing the positions of sequential $point\_down$ and $point\_move$ events. More complex gestures, like the writing of a character from the alphabet, require more advanced detection algorithms. A way to detect these complex gestures based on a sequence of input events is with the use of machine learning methods, such as the Hidden Markov Models\footnote{A Hidden Markov Model (HMM) is a statistical model without a memory; it can be used to detect gestures based on the current input state alone.} used for sign language detection by \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to a feature vector that is recognized as a particular gesture with a certain probability. An advantage of using machine learning over an imperative programming style is that complex gestures can be described without the use of explicit detection logic. For example, the detection of the character `A' being written on the screen is difficult to implement using an imperative programming style, while a trained machine learning system can produce a match with relative ease.

Sequences of events triggered by a multi-touch surface are often of a manageable complexity. An imperative programming style is sufficient to detect many common gestures, like rotation and dragging. The imperative programming style is also familiar and understandable for a wide range of application developers. Therefore, the architecture should support an imperative style of gesture detection. A problem with an imperative programming style is that the explicit detection of different gestures requires different gesture detection components. If these components are not managed well, the detection logic is prone to become chaotic and over-complex.

To manage complexity and support multiple styles of gesture detection logic, the architecture adopts the tracker-based design described by \cite{win7touch}. Different detection components are wrapped in separate gesture tracking units, or \emph{gesture trackers}. The input of a gesture tracker is provided by an event area in the form of events. Each gesture detection component is wrapped in a gesture tracker with a fixed type of input and output. Internally, the gesture tracker can adopt any programming style. A character recognition component can use an HMM, whereas a tap detection component defines a simple function that compares event coordinates.

\trackerdiagram

When a gesture tracker detects a gesture, this gesture is triggered in the corresponding event area. The event area then calls the callbacks that are bound to the gesture type by the application. Figure \ref{fig:trackerdiagram} shows the position of gesture trackers in the architecture.

The use of gesture trackers as small detection units makes the architecture extendable. A developer can write a custom gesture tracker and register it in the architecture. The tracker can use any type of detection logic internally, as long as it translates events to gestures. An example of a possible gesture tracker implementation is a ``transformation tracker'' that detects rotation, scaling and translation gestures.
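To make this concrete, a minimal ``tap'' tracker could look as follows, building on the hypothetical event and event area sketches shown earlier. The thresholds and the interface are illustrative assumptions and are not taken from the reference implementation.

\begin{verbatim}
import time
from collections import namedtuple

Gesture = namedtuple('Gesture', 'type x y')

class TapTracker:
    """Detects a 'tap': a point that is released quickly without moving."""
    MAX_DISTANCE = 0.01  # maximum movement, in normalized coordinates
    MAX_DURATION = 0.3   # maximum duration, in seconds

    def __init__(self, area):
        self.area = area  # event area that triggers detected gestures
        self.down = {}    # object_id -> (x, y, timestamp)

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down[event.object_id] = (event.x, event.y, time.time())
        elif event.type == 'point_up' and event.object_id in self.down:
            x, y, start = self.down.pop(event.object_id)
            distance = ((event.x - x) ** 2 + (event.y - y) ** 2) ** 0.5
            if (distance <= self.MAX_DISTANCE and
                    time.time() - start <= self.MAX_DURATION):
                self.area.trigger(Gesture('tap', event.x, event.y))
        return False  # a tap tracker never stops event propagation
\end{verbatim}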
\section{Serving multiple applications}
\label{sec:daemon}

The design of the architecture is essentially complete with the components specified in this chapter. However, one specification has not yet been discussed: the ability to address the architecture using a method of communication that is independent of the application's programming language.

If the architecture and a gesture-based application are written in the same language, the main loop of the architecture can run in a separate thread of the application. If the application is written in a different language, the architecture has to run in a separate process. Since the application needs to respond to gestures that are triggered by the architecture, there must be a communication layer between the separate processes. A common and efficient way of communication between two separate processes is through the use of a network protocol. In this particular case, the architecture can run as a daemon\footnote{``Daemon'' is the name Unix uses for a process that runs in the background.} process, listening to driver messages and triggering gestures in registered applications.

\vspace{-0.3em}
\daemondiagram

An advantage of a daemon setup is that it can serve multiple applications at the same time. Alternatively, each application that uses gesture interaction would have to start its own instance of the architecture in a separate process, which would be less efficient.

\section{Example usage}
\label{sec:example}

This section describes an extended example to illustrate the data flow of the architecture. The example application listens to tap events on a button within an application window. The window also contains a draggable circle. The application window can be resized using \emph{pinch} gestures. Figure \ref{fig:examplediagram} shows the architecture created by the pseudo code below.

\begin{verbatim}
initialize GUI framework, creating a window and necessary GUI widgets

create a root event area that synchronizes position and size with the
    application window
define 'pinch' gesture handler and bind it to the root event area

create an event area with the position and radius of the circle
define 'drag' gesture handler and bind it to the circle event area

create an event area with the position and size of the button
define 'tap' gesture handler and bind it to the button event area

create a new event server and assign the created root event area to it
start the event server in a new thread
start the GUI main loop in the current thread
\end{verbatim}

\examplediagram

\chapter{Test applications}

A reference implementation of the design has been written in Python. Two test applications have been created to test whether the design ``works'' in a practical application, and to detect its flaws. One application is mainly used to test the gesture tracker implementations. The other program uses multiple event areas in a tree structure, demonstrating event delegation and propagation.

To test multi-touch interaction properly, a multi-touch device is required. The University of Amsterdam (UvA) has provided access to a multi-touch table from PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch events.
See appendix \ref{app:tuio} for details regarding the TUIO protocol.

%The reference implementation and its test applications are a Proof of Concept,
%meant to show that the architecture design is effective.
%that translates TUIO messages to some common multi-touch gestures.

\section{Reference implementation}
\label{sec:implementation}

The reference implementation is written in Python and available at \cite{gitrepos}. The following component implementations are included:

\textbf{Event drivers}
\begin{itemize}
\item TUIO driver, using only the support for simple touch points with an $(x, y)$ position.
\end{itemize}

\textbf{Gesture trackers}
\begin{itemize}
\item Basic tracker, supports $point\_down,~point\_move,~point\_up$ gestures.
\item Tap tracker, supports $tap,~single\_tap,~double\_tap$ gestures.
\item Transformation tracker, supports $rotate,~pinch,~drag$ gestures.
\end{itemize}

\textbf{Event areas}
\begin{itemize}
\item Circular area
\item Rectangular area
\item Full screen area
\end{itemize}

The implementation does not include a network protocol to support the daemon setup as described in section \ref{sec:daemon}. Therefore, it is only usable in Python programs; consequently, the two test programs are also written in Python. The event area implementations contain some geometric functions to determine whether an event should be delegated to an event area. All gesture trackers have been implemented using an imperative programming style. Technical details about the implementation of gesture detection are described in appendix \ref{app:implementation-details}.

\section{Full screen Pygame program}

%The goal of this program was to experiment with the TUIO
%protocol, and to discover requirements for the architecture that was to be
%designed. When the architecture design was completed, the program was rewritten
%using the new architecture components. The original variant is still available
%in the ``experimental'' folder of the Git repository \cite{gitrepos}.

An implementation of the detection of some simple multi-touch gestures (single tap, double tap, rotation, pinch and drag) using Processing\footnote{Processing is a Java-based programming environment with an export possibility for Android. See also \cite{processing}.} can be found on a forum on the Processing website \cite{processingMT}. The program has been ported to Python and adapted to receive input from the TUIO protocol. The implementation is fairly simple, but it yields some appealing results (see figure \ref{fig:draw}).

In the original program, the detection logic of all gestures is combined in a single class file. As predicted by the GART article \cite{GART}, this leads to over-complex code that is difficult to read and debug. The application has been rewritten using the reference implementation of the architecture. The detection code is separated into two different gesture trackers, which are the ``tap'' and ``transformation'' trackers mentioned in section \ref{sec:implementation}.

The application receives TUIO events and translates them to \emph{point\_down}, \emph{point\_move} and \emph{point\_up} events. These events are then interpreted as \emph{single tap}, \emph{double tap}, \emph{rotation} or \emph{pinch} gestures. The positions of all touch objects are drawn using the Pygame library. Since the Pygame library does not provide support to find the location of the display window, the root event area captures events on the entire screen surface. The application can be run either in full screen or in windowed mode. If windowed, screen-wide gesture coordinates are mapped to the size of the Pygame window. In other words, the Pygame window always represents the entire touch surface.
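Since TUIO coordinates are normalized to the range $[0, 1]$ (see appendix \ref{app:tuio}), this mapping only requires scaling by the window size, as the sketch below illustrates (the window dimensions are arbitrary example values).

\begin{verbatim}
def to_window_coordinates(x, y, window_width, window_height):
    """Map normalized touch coordinates in [0, 1] to window pixels."""
    return int(x * window_width), int(y * window_height)

print(to_window_coordinates(0.5, 0.25, 800, 600))  # -> (400, 150)
\end{verbatim}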
The output of the program can be seen in figure \ref{fig:draw}.

\begin{figure}[h!]
\center
\includegraphics[scale=0.4]{data/pygame_draw.png}
\caption{Output of the experimental drawing program. It draws all touch points and their centroid on the screen (the centroid is used for rotation and pinch detection). It also draws a green rectangle which responds to rotation and pinch events.}
\label{fig:draw}
\end{figure}

\section{GTK/Cairo program}

The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create its user interface. Since GTK+ defines a main event loop that must be started in order to use the interface, the architecture implementation runs in a separate thread. The application creates a main window, whose size and position are synchronized with the root event area of the architecture.

% TODO
\emph{TODO: expand and add screenshots (this program is not finished yet).}

\section{Discussion}

% TODO
\emph{TODO: point out the shortcomings that emerge from the tests.}

% Different devices/drivers emit different kinds of primitive events. A
% translation of these device-specific events to a common event format is
% needed to perform gesture detection in a generic way.

% By passing the input of multiple drivers through the same kind of event
% driver, multiple devices are supported at the same time.

% The event driver delivers low-level events. Not every event belongs to every
% gesture, so there must be a filtering of which events belong to which
% gesture. Event areas provide this for devices on which the filtering is
% location-based.

% Splitting gesture detection over gesture trackers is a way to be flexible in
% the supported types of detection logic, and to keep complexity manageable.

\chapter{Suggestions for future work}

\section{A generic method for grouping events}
\label{sec:eventfilter}

As mentioned in section \ref{sec:areas}, the concept of an event area is based on the assumption that the set of originating events that form a particular gesture can be determined based exclusively on the location of the events. Since this thesis focuses on multi-touch surface based devices, and every object on a multi-touch surface has a position, this assumption is valid. However, the design of the architecture is meant to be more generic: to provide a structured way of managing gesture detection. An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect}, provides 3D positions. Some multi-touch tables work with a camera that can also determine the shape and rotational orientation of objects touching the surface. For these devices, events delegated by the event driver have more parameters than a 2D position alone.

The term ``area'' is not suitable to describe a group of events that consist of these parameters. A more generic term for a component that groups similar events is the \emph{event filter}. The concept of an event filter is based on the same principle as event areas, which is the assumption that gestures are formed from a subset of all events. However, an event filter takes all parameters of an event into account. An application on the camera-based multi-touch table could be to group all objects that are triangular into one filter, and all rectangular objects into another.
Or, to separate small fingertips from large ones, to be able to recognize whether a child or an adult touches the table.

\section{Using a state machine for gesture detection}

All gesture trackers in the reference implementation are based on the explicit analysis of events. Gesture detection is a widely researched subject, and the separation of detection logic into different trackers allows for multiple types of gesture detection in the same architecture. An interesting question is whether multi-touch gestures can be described in a formal way, so that explicit detection code can be avoided.

\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning to recognize gestures. To use machine learning, a set of input events forming a particular gesture must be represented as a feature vector. A learning set containing a set of feature vectors that represent some gesture ``teaches'' the machine what the features of that gesture look like. An advantage of using explicit gesture detection code is the fact that it provides a flexible way to specify the characteristics of a gesture, whereas the performance of feature vector-based machine learning is dependent on the quality of the learning set.

A better method to describe a gesture might be to specify its features as a ``signature''. The parameters of such a signature must be based on input events. When a set of input events matches the signature of some gesture, the gesture is triggered. A gesture signature should be a complete description of all requirements the set of events must meet to form the gesture.

A way to describe signatures on a multi-touch surface could be the use of a state machine for its touch objects. The states of a simple touch point could be $\{down, move, up, hold\}$ to indicate respectively that a point is put down, is being moved, is held at a position for some time, and is released. In this case, a ``drag'' gesture can be described by the sequence $down - move - up$ and a ``select'' gesture by the sequence $down - hold$.

If the set of states is not sufficient to describe a desired gesture, a developer can add additional states. For example, to be able to make a distinction between an element being ``dragged'' or ``thrown'' in some direction on the screen, two additional states can be added: $\{start, stop\}$, indicating that a point starts or stops moving. The resulting state transitions are the sequences $down - start - move - stop - up$ and $down - start - move - up$ (the latter does not include a $stop$, to indicate that the element must keep moving after the gesture has been performed). An additional way to describe even more complex gestures is to use other gestures in a signature. An example is to combine $select - drag$ to specify that an element must be selected before it can be dragged.

The application of a state machine to describe multi-touch gestures is a subject well worth exploring in the future.
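As a minimal illustration of this idea, the sketch below matches an observed sequence of states against a set of gesture signatures. The signatures are the ones suggested above; the matching logic is deliberately simplistic and only meant to show the principle.

\begin{verbatim}
SIGNATURES = {
    'drag':   ['down', 'start', 'move', 'stop', 'up'],
    'throw':  ['down', 'start', 'move', 'up'],
    'select': ['down', 'hold'],
}

def match_signature(states):
    """Name of the gesture whose signature equals the observed state
    sequence, or None if no signature matches."""
    for name, signature in SIGNATURES.items():
        if states == signature:
            return name
    return None

# A point that is put down, starts moving and is released while still
# moving matches the 'throw' signature.
print(match_signature(['down', 'start', 'move', 'up']))  # -> 'throw'
\end{verbatim}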
\section{Daemon implementation}

Section \ref{sec:daemon} proposes the use of a network protocol to communicate between an architecture implementation and (multiple) gesture-based applications, as illustrated in figure \ref{fig:daemon}. The reference implementation does not support network communication. If the architecture design is to become successful in the future, the implementation of network communication is a must. ZeroMQ (or $\emptyset$MQ) \cite{ZeroMQ} is a high-performance software library with support for a wide range of programming languages; a future implementation could use it as the basis for its communication layer.

If an implementation of the architecture is released, a good idea would be to do so within a community of application developers. A community can contribute to a central database of gesture trackers, making the interaction from their applications available for use in other applications. Ideally, a user can install a daemon process containing the architecture so that it is usable by any gesture-based application on the device. Applications that use the architecture can specify it as a software dependency, or include it in a software distribution.

\bibliographystyle{plain}
\bibliography{report}{}

\appendix

\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible objects, such as fingers or objects on a multi-touch table. Object information is sent to the TUIO UDP port (3333 by default). For efficiency reasons, the TUIO protocol is encoded using the Open Sound Control \cite[OSC]{OSC} format. An OSC server/client implementation is available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}. However, the execution of an example script yields an error regarding Python's built-in \texttt{socket} library. Therefore, the reference implementation uses the pyOSC package to receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET messages. An ALIVE message contains the list of session id's that are currently ``active'', which in the case of a multi-touch table means that they are touching the screen. A SET message provides geometric information for a session id, such as position, velocity and acceleration. Each session id represents an object. The only type of object on the multi-touch table is what the TUIO protocol calls ``2DCur'', which is an $(x, y)$ position on the screen.

ALIVE messages can be used to determine when an object touches and releases the screen. For example, if a session id was present in the previous ALIVE message but not in the current one, the object it represents has been lifted from the screen. SET messages provide information about movement. In the case of simple $(x, y)$ positions, only the movement vector of the position itself can be calculated. For more complex objects such as fiducials, arguments like rotational position and acceleration are also included. ALIVE and SET messages can be combined to create ``point down'', ``point move'' and ``point up'' events (as used by the Windows 7 implementation \cite{win7touch}).

TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To use these events within a window, a translation to window coordinates is required in the client application, as stated by the online specification \cite{TUIO_specification}:

\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO tracker implementation needs to divide these values by the actual sensor dimension, while a TUIO client implementation consequently can scale these values back to the actual screen dimension.
\end{quote}

\chapter{Gesture detection in the reference implementation}
\label{app:implementation-details}

Both rotation and pinch detection use the centroid of all touch points. A \emph{rotation} gesture uses the difference in angle relative to the centroid of all touch points, and \emph{pinch} uses the difference in distance. Both values are normalized by dividing by the number of touch points. A pinch event contains a scale factor, and therefore uses the ratio of the current to the previous average distance to the centroid.
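The sketch below shows one way to compute these values from the previous and current positions of the touch points. It is an illustration of the description above, not a copy of the reference implementation.

\begin{verbatim}
import math

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)

def rotation_and_pinch(previous, current):
    """Average angle difference and distance ratio w.r.t. the centroid."""
    pcx, pcy = centroid(previous)
    ccx, ccy = centroid(current)
    angle = 0.0
    prev_dist = cur_dist = 0.0
    for (px, py), (cx, cy) in zip(previous, current):
        diff = (math.atan2(cy - ccy, cx - ccx) -
                math.atan2(py - pcy, px - pcx))
        # Wrap the difference into (-pi, pi] to avoid jumps at the boundary.
        angle += math.atan2(math.sin(diff), math.cos(diff))
        prev_dist += math.hypot(px - pcx, py - pcy)
        cur_dist += math.hypot(cx - ccx, cy - ccy)
    angle /= len(current)          # normalize by the number of touch points
    scale = cur_dist / prev_dist   # > 1.0 means the points moved apart
    return angle, scale

# Two fingers rotating a quarter turn around their common centroid:
print(rotation_and_pinch([(0.0, 1.0), (0.0, -1.0)],
                         [(1.0, 0.0), (-1.0, 0.0)]))  # ~(-1.571, 1.0)
\end{verbatim}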
% TODO
\emph{TODO: rotation and pinch will be described somewhat differently and in more detail.}

\end{document}