- \documentclass[twoside,openright]{uva-bachelor-thesis}
- \usepackage[english]{babel}
- \usepackage[utf8]{inputenc}
- \usepackage{hyperref,graphicx,tikz,subfigure,float}
- % Link colors
- \hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}
- % Title Page
- \title{A generic architecture for gesture-based interaction}
- \author{Taddeüs Kroes}
- \supervisors{Dr. Robert G. Belleman (UvA)}
- \signedby{Dr. Robert G. Belleman (UvA)}
- \begin{document}
- % Title page
- \maketitle
- \begin{abstract}
- % TODO
- \end{abstract}
- % Set paragraph indentation
- \parindent 0pt
- \parskip 1.5ex plus 0.5ex minus 0.2ex
- % Table of content on separate page
- \tableofcontents
- \chapter{Introduction}
- \label{chapter:introduction}
- Surface-touch devices have evolved from pen-based tablets to single-touch
- trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
- devices enable a user to interact with software using hand gestures, making the
- interaction more expressive and intuitive. These gestures are more complex than
- primitive ``click'' or ``tap'' events that are used by single-touch devices.
- Some examples of more complex gestures are ``pinch''\footnote{A ``pinch''
- gesture is formed by performing a pinching movement with multiple fingers on a
- multi-touch surface. Pinch gestures are often used to zoom in or out on an
- object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an
- object and throwing it in a direction on a touch surface, giving it momentum to
- move for some time after the hand releases the surface.} gestures.
- Complex gestures are not limited to navigation on smartphones. Some
- multi-touch devices are already capable of recognizing objects touching the
- screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
- will possibly be extended or even replaced with in-air interaction (Microsoft's
- Kinect \cite{kinect} and the Leap \cite{leap}).
- The interaction devices mentioned above generate primitive events. In the case
- of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
- events. Application programmers who want to incorporate complex, intuitive
- gestures in their application face the challenge of interpreting these
- primitive events as gestures. With the increasing complexity of gestures, the
- complexity of the logic required to detect these gestures increases as well.
- This challenge limits, or even deters, application developers from using
- complex gestures in their applications.
- The main question in this research project is whether a generic architecture
- for the detection of complex interaction gestures can be designed, with the
- capability of managing the complexity of gesture detection logic. The ultimate
- goal would be to create an implementation of this architecture that can be
- extended to support a wide range of complex gestures. With the existence of
- such an implementation, application developers do not need to reinvent gesture
- detection for every new gesture-based application.
- Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
- already include the detection of commonly used gestures such as \emph{pinch}
- gestures. However, this detection logic is dependent on the application
- framework. Consequently, an application developer who wants to use multi-touch
- interaction in an application is forced to use an application framework that
- includes support for multi-touch gestures. Moreover, the set of supported
- gestures is limited by the application framework of choice. To incorporate a
- custom gesture in an application, the application developer needs to extend the
- framework, which requires extensive knowledge of the framework's architecture.
- Also, if the same gesture is needed in another application that is based on
- another framework, the detection logic has to be translated for use in that
- framework. Nevertheless, application frameworks are a necessity when it comes
- to fast, cross-platform development. A generic architecture design should aim
- to be compatible with existing frameworks, and provide a way to detect and
- extend gestures independent of the framework.
- Application frameworks are written in a specific programming language. To
- support multiple frameworks and programming languages, the architecture should
- be accessible to applications through a language-independent method of
- communication. This leads to the concept of a dedicated gesture
- detection application that serves gestures to multiple applications at the same
- time.
- The scope of this thesis is limited to the detection of gestures on multi-touch
- surface devices. It presents a design for a generic gesture detection
- architecture for use in multi-touch based applications. A reference
- implementation of this design is used in some test case applications, whose
- goal is to test the effectiveness of the design and detect its shortcomings.
- \section{Structure of this document}
- % TODO: pas als thesis af is
- \chapter{Related work}
- \section{Gesture and Activity Recognition Toolkit}
- The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
- toolkit for the development of gesture-based applications. The toolkit
- states that the best way to classify gestures is to use machine learning.
- The programmer trains a program to recognize gestures using the machine
- learning library from the toolkit. The toolkit contains a callback mechanism that
- the programmer uses to execute custom code when a gesture is recognized.
- Though multi-touch input is not directly supported by the toolkit, the
- level of abstraction does allow for it to be implemented in the form of a
- ``touch'' sensor.
- The motivation for using machine learning is the claim that gesture detection
- ``is likely to become increasingly complex and unmanageable'' when a set of
- predefined rules is used to decide whether some sensor input constitutes a
- specific gesture. This claim does not necessarily hold. If the
- programmer is given a way to separate the detection of different types of
- gestures and flexibility in rule definitions, over-complexity can be
- avoided.
- \section{Gesture recognition implementation for Windows 7}
- The online article \cite{win7touch} presents a Windows 7 application,
- written in Microsoft's .NET. The application shows detected gestures in a
- canvas. Gesture trackers keep track of stylus locations to detect specific
- gestures. The event types required to track a touch stylus are ``stylus
- down'', ``stylus move'' and ``stylus up'' events. A
- \texttt{GestureTrackerManager} object dispatches these events to gesture
- trackers. The application supports a limited number of pre-defined
- gestures.
- An important observation in this application is that different gestures are
- detected by different gesture trackers, thus separating gesture detection
- code into maintainable parts.
- % TODO: This is not really 'related', move it to somewhere else
- \section{Processing implementation of simple gestures in Android}
- An implementation of a detection architecture for some simple multi-touch
- gestures (tap, double tap, rotation, pinch and drag) using
- Processing\footnote{Processing is a Java-based development environment with
- an export option for Android. See also \url{http://processing.org/}.}
- can be found in a forum on the Processing website \cite{processingMT}. The
- implementation is fairly simple, but it yields some very appealing results.
- The detection logic of all gestures is combined in a single class. This
- limits extensibility, because the complexity of this class
- would increase to an undesirable level (as predicted by the GART article
- \cite{GART}). However, the detection logic itself is partially re-used in
- the reference implementation of the generic gesture detection architecture.
- \section{Analysis of related work}
- The simple Processing implementation of multi-touch events provides most of
- the functionality that can be found in existing multi-touch applications.
- In fact, many applications for mobile phones and tablets only use tap and
- scroll events. For this category of applications, using machine learning
- seems excessive. Though the representation of a gesture using a feature
- vector in a machine learning algorithm is a generic and formal way to
- define a gesture, a programmer-friendly architecture should also support
- simple, ``hard-coded'' detection code. A way to separate different pieces
- of gesture detection code, thus keeping a code library manageable and
- extendable, is to use separate gesture trackers.
- % FIXME: change title below
- \chapter{Design}
- \label{chapter:design}
- % Diagrams are defined in a separate file
- \input{data/diagrams}
- \section{Introduction}
- This chapter describes the realization of a design for the generic
- multi-touch gesture detection architecture. The chapter represents the
- architecture as a diagram of relations between different components.
- Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
- requirements for the architecture, and extend the diagram with components
- that meet these requirements. Section \ref{sec:example} describes an
- example usage of the architecture in an application.
- \subsection*{Position of architecture in software}
- The input of the architecture comes from a multi-touch device driver.
- The task of the architecture is to translate this input to multi-touch
- gestures that are used by an application, as illustrated in figure
- \ref{fig:basicdiagram}. In the course of this chapter, the diagram is
- extended with the different components of the architecture.
- \basicdiagram{A diagram showing the position of the architecture
- relative to the device driver and a multi-touch application. The input
- of the architecture is given by a touch device driver. This input is
- translated to complex interaction gestures and passed to the
- application that is using the architecture.}
- \section{Supporting multiple drivers}
- \label{sec:driver-support}
- The TUIO protocol \cite{TUIO} is an example of a driver protocol that can be
- used by multi-touch devices. TUIO uses ALIVE and SET messages to communicate
- low-level touch events (see appendix \ref{app:tuio} for more details).
- These messages are specific to the API of the TUIO protocol. Other touch
- drivers may use very different message types. To support more than
- one driver in the architecture, there must be some translation from
- driver-specific messages to a common format for primitive touch events.
- After all, the gesture detection logic in a ``generic'' architecture should
- not be implemented based on driver-specific messages. The event types in
- this format should be chosen so that multiple drivers can trigger the same
- events. If each supported driver added its own set of event types to
- the common format, the purpose of being ``common'' would be defeated.
- A minimal expectation for a touch device driver is that it detects simple
- touch points, with a ``point'' being an object at an $(x, y)$ position on
- the touch surface. This yields a basic set of events: $\{point\_down,
- point\_move, point\_up\}$.
- The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
- by some touch devices to identify objects.}, which also have a rotational
- property. This results in a more extended set: $\{point\_down, point\_move,
- point\_up, object\_down, object\_move, object\_up, object\_rotate\}$.
- Due to their generic nature, the use of these events is not limited to the
- TUIO protocol. Another driver that can distinguish rotatable objects from
- simple touch points could also trigger them.
- The component that translates driver-specific messages to common events
- will be called the \emph{event driver}. The event driver runs in a loop,
- receiving and analyzing driver messages. When a sequence of messages is
- analyzed as an event, the event driver delegates the event to other
- components in the architecture for translation to gestures. This
- communication flow is illustrated in figure \ref{fig:driverdiagram}.
- Support for a touch device driver can be added by writing a corresponding
- event driver implementation. The choice of event driver used in an
- application depends on which driver is supported by the touch device being
- used.
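- As an illustration, the sketch below shows what the common event format and a
- minimal event driver interface could look like in Python. The names
- \texttt{Event} and \texttt{EventDriver} are chosen for illustration only and
- are not necessarily those used in the reference implementation.
- \begin{verbatim}
- from collections import namedtuple
- # Common event format: a type tag plus the parameters shared by all drivers.
- Event = namedtuple('Event', ['type', 'object_id', 'x', 'y'])
- class EventDriver(object):
-     """Base class translating driver-specific messages to common events."""
-     def __init__(self):
-         self.listeners = []
-     def delegate(self, event):
-         # Pass a common event on to the analysis components.
-         for listener in self.listeners:
-             listener.handle_event(event)
-     def receive_message(self, message):
-         # To be implemented per driver: parse a driver-specific message and
-         # call self.delegate() with zero or more common events.
-         raise NotImplementedError
- \end{verbatim}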
- \driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
- showing the position of the event driver in the architecture. The event
- driver translates driver-specific messages to a common set of events, which are
- delegated to analysis components that will interpret them as more complex
- gestures.}
- \section{Restricting events to a screen area}
- \label{sec:restricting-gestures}
- % TODO: in introduction: gestures are composed of multiple primitives
- Touch input devices are unaware of the graphical input widgets rendered on
- screen and therefore generate events that simply identify the screen
- location at which an event takes place. In order to be able to direct a
- gesture to a particular widget on screen, an application programmer must
- restrict the occurrence of a gesture to the area of the screen covered by
- that widget. An important question is whether the architecture should offer a
- solution to this problem, or leave it to the application developer to
- assign gestures to a widget.
- The latter case generates a problem when a gesture must be able to occur at
- different screen positions at the same time. Consider the example in figure
- \ref{fig:ex1}, where two squares must be able to be rotated independently
- at the same time. If the developer is left with the task of assigning a gesture to
- one of the squares, the event analysis component in figure
- \ref{fig:driverdiagram} receives all events that occur on the screen.
- Assuming that the rotation detection logic detects a single rotation
- gesture based on all of its input events, without detecting clusters of
- input events, only one rotation gesture can be triggered at the same time.
- When a user attempts to ``grab'' one rectangle with each hand, the events
- triggered by all fingers are combined to form a single rotation gesture
- instead of two separate gestures.
- \examplefigureone
- To overcome this problem, groups of events must be separated by the event
- analysis component before any detection logic is executed. An obvious
- solution for the given example is to incorporate this separation in the
- rotation detection logic itself, using a distance threshold that decides if
- an event should be added to an existing rotation gesture. Leaving the task
- of separating groups of events to detection logic leads to duplication of
- code. For instance, if the rotation gesture is replaced by a \emph{pinch}
- gesture that enlarges a rectangle, the detection logic that detects the
- pinch gesture would have to contain the same code that separates groups of
- events for different gestures. Also, a pinch gesture can be performed using
- fingers of multiple hands, in which case the use of a simple distance
- threshold is insufficient. These examples show that gesture detection logic
- is hard to implement without knowledge about (the position of) the
- widget\footnote{``Widget'' is a name commonly used to identify an element
- of a graphical user interface (GUI).} that is receiving the gesture.
- Therefore, a better solution for the assignment of events to gesture
- detection is to make the gesture detection component aware of the locations
- of application widgets on the screen. To accomplish this, the architecture
- must contain a representation of the screen area covered by a widget. This
- leads to the concept of an \emph{area}, which represents an area on the
- touch surface in which events should be grouped before being delegated to a
- form of gesture detection. Examples of simple area implementations are
- rectangles and circles. However, areas could be made to represent more
- complex shapes.
- An area groups events and assigns them to some piece of gesture detection
- logic. This possibly triggers a gesture, which must be handled by the
- client application. A common way to handle framework events in an
- application is a ``callback'' mechanism: the application developer binds a
- function to an event; this function is called by the framework when the event
- occurs. Because developers are familiar with this concept, the
- architecture uses a callback mechanism to handle gestures in an
- application. Since an area controls the grouping of events and thus the
- occurrence of gestures in an area, gesture handlers for a specific gesture
- type are bound to an area. Figure \ref{fig:areadiagram} shows the position
- of areas in the architecture.
- \areadiagram{Extension of the diagram from figure \ref{fig:driverdiagram},
- showing the position of areas in the architecture. An area delegates events
- to a gesture detection component that triggers gestures. The area then calls
- the handler that is bound to the gesture type by the application.}
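- The following sketch gives a minimal impression of how an area and its
- gesture handlers could be represented in Python. The class and method names
- (\texttt{RectangularArea}, \texttt{contains}, \texttt{bind}) are illustrative
- assumptions, not necessarily the API of the reference implementation.
- \begin{verbatim}
- class RectangularArea(object):
-     """Rectangular screen area that groups events and triggers callbacks."""
-     def __init__(self, x, y, width, height):
-         self.x, self.y = x, y
-         self.width, self.height = width, height
-         self.handlers = {}
-     def contains(self, x, y):
-         # Decide whether an event at (x, y) belongs to this area.
-         return (self.x <= x <= self.x + self.width
-                 and self.y <= y <= self.y + self.height)
-     def bind(self, gesture_type, handler):
-         # Bind an application callback to a gesture type.
-         self.handlers.setdefault(gesture_type, []).append(handler)
-     def trigger(self, gesture):
-         # Called by gesture detection logic when a gesture is recognized.
-         for handler in self.handlers.get(gesture.type, []):
-             handler(gesture)
- \end{verbatim}
- An application would create such an area for each widget that should respond
- to gestures, and bind its gesture handlers to that area.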
- Note that the boundaries of an area are only used to group events, not
- gestures. A gesture could occur outside the area that contains its
- originating events, as illustrated by the example in figure \ref{fig:ex2}.
- \examplefiguretwo
- A remark must be made about the use of areas to assign events to the
- detection of some gesture. The concept of an ``area'' is based on the
- assumption that the set of originating events that form a particular gesture
- can be determined based exclusively on the location of the events. This is a
- reasonable assumption for simple touch objects whose only parameter is a
- position, such as a pen or a human finger. However, more complex touch
- objects can have additional parameters, such as rotational orientation or
- color. An even more generic concept is the \emph{event filter}, which
- detects whether an event should be assigned to a particular piece of
- gesture detection based on all available parameters. This level of
- abstraction allows for constraints like ``Use all blue objects within a
- widget for rotation, and green objects for tapping''. As mentioned in the
- introduction (chapter \ref{chapter:introduction}), the scope of this thesis
- is limited to multi-touch surface based devices, for which the \emph{area}
- concept suffices. Section \ref{sec:eventfilter} explores the possibility of
- areas to be replaced with event filters.
- \subsection*{Reserving an event for a gesture}
- The simplest implementation of areas in the architecture is a list of
- areas. When the event driver delegates an event, it is delegated to gesture
- detection by each area that contains the event coordinates. A problem
- occurs when areas overlap, as shown by figure \ref{fig:ex3}. When the
- white rectangle is rotated, the gray square should keep its current
- orientation. This means that events that are used for rotation of the white
- square should not be used for rotation of the gray square. To achieve
- this, there must be some communication between the rotation detection
- components of the two squares.
- \examplefigurethree
- % simplest approach is a list of areas; if an event falls inside one,
- % delegate it. problem (illustrate with example of nested widgets that both
- % listen for tap): when areas overlap, certain events should be reserved for
- % certain pieces of detection logic
- % solution: store areas in a tree structure and use event propagation
- % -> an area inside a parent area can propagate events to that parent,
- % detection logic can stop the propagation. to propagate upwards in the tree,
- % the event must first arrive at the leaf, so first delegate down to the
- % lowest leaf node that contains the event.
- % special case: overlapping areas in the same layer of the tree. in that
- % case, the area that was added later (right sibling) is assumed to lie on
- % top of its left sibling and therefore receives the event first.
- % If propagation is stopped in the top (right) area, the underlying (left)
- % sibling does not receive the event either
- % additional advantage of the tree structure: easy to integrate with e.g.
- % GTK, which uses a tree structure for its widgets -> create an area for
- % each widget that receives touch events
- %For example, a button tap\footnote{A ``tap'' gesture is triggered when a
- %touch object releases a touch surface within a certain time and distance
- %from the point where it initially touched the surface.} should only occur
- %on the button itself, and not in any other area of the screen. A solution
- %to this problem is the use of \emph{widgets}. The button from the example
- %can be represented as a rectangular widget with a position and size. The
- %position and size are compared with event coordinates to determine whether
- %an event should occur within the button.
- \subsection*{Area tree}
- A problem occurs when widgets overlap. If a button is placed over a
- container and an event occurs inside the button, should the
- button handle the event first? And should the container receive the
- event at all, or should it be reserved for the button?
- The solution to this problem is to save widgets in a tree structure.
- There is one root widget, whose size is limited by the size of the
- touch screen. Being the leaf widget, and thus the widget that is
- actually touched when an object touches the device, the button widget
- should receive an event before its container does. However, events
- occur on a screen-wide level and thus at the root level of the widget
- tree. Therefore, an event is delegated in the tree before any analysis
- is performed. Delegation stops at the ``lowest'' widget in the tree
- containing the event coordinates. That widget then performs some
- analysis of the event, after which the event is released back to the
- parent widget for analysis. This release of an event to a parent widget
- is called \emph{propagation}. To be able to reserve an event to some
- widget or analysis, the propagation of an event can be stopped during
- analysis.
- % TODO: inspired by JavaScript DOM
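- A minimal sketch of this delegation and propagation mechanism is given
- below. It assumes the hypothetical \texttt{contains} method from the area
- sketch above and an \texttt{analyze} method that returns \texttt{True} when
- propagation should be stopped; for brevity, propagation only moves up to
- parent nodes, not to overlapping siblings.
- \begin{verbatim}
- class AreaNode(object):
-     """Node in the area tree; later siblings are assumed to lie on top."""
-     def __init__(self, area, parent=None):
-         self.area = area
-         self.parent = parent
-         self.children = []
-     def add_child(self, area):
-         child = AreaNode(area, parent=self)
-         self.children.append(child)
-         return child
-     def delegate(self, event):
-         # Delegate down to the lowest node whose area contains the event.
-         for child in reversed(self.children):
-             if child.area.contains(event.x, event.y):
-                 return child.delegate(event)
-         # This is the lowest matching node: analyze the event here, then
-         # propagate it upwards until some analysis stops the propagation.
-         node = self
-         while node is not None:
-             if node.area.analyze(event):  # True means "stop propagation"
-                 break
-             node = node.parent
- \end{verbatim}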
- Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
- manage their widgets. This makes it easy to connect the architecture to
- such a framework. For example, the programmer can define a
- \texttt{GtkTouchWidget} that synchronises the position of a touch
- widget with that of a GTK widget, using GTK signals.
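- For instance, assuming a PyGTK-style widget that emits a
- \texttt{size-allocate} signal, the synchronisation could look roughly as
- follows (the function name is hypothetical):
- \begin{verbatim}
- def attach_area_to_gtk_widget(area, gtk_widget):
-     # Keep the rectangular area in sync with the widget's screen allocation.
-     def on_size_allocate(widget, allocation):
-         area.x, area.y = allocation.x, allocation.y
-         area.width, area.height = allocation.width, allocation.height
-     gtk_widget.connect('size-allocate', on_size_allocate)
- \end{verbatim}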
- \section{Detecting gestures from events}
- \label{sec:gesture-detection}
- The events that are grouped by areas must be translated to complex gestures
- in some way. This analysis is specific to the type of gesture being
- detected. For example, the detection of a ``tap'' gesture is very different
- from that of a ``rotate'' gesture. The architecture has adopted the
- \emph{gesture tracker}-based design described by \cite{win7touch}, which
- separates the detection of different gestures into different \emph{gesture
- trackers}. This keeps the different pieces of gesture detection code
- manageable and extendable. A single gesture tracker detects a specific set
- of gesture types, given a set of primitive events. An example of a possible
- gesture tracker implementation is a ``transformation tracker'' that detects
- rotation, scaling and translation gestures.
- % TODO: a formal definition of gestures might be better, but is not given
- % in this thesis (it is discussed in the future work chapter)
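- As a sketch, a tap-detecting gesture tracker built on the event and area
- sketches above could look as follows. The class name and the timing and
- distance thresholds are illustrative assumptions.
- \begin{verbatim}
- import time
- from collections import namedtuple
- Gesture = namedtuple('Gesture', ['type', 'x', 'y'])
- class TapTracker(object):
-     """Detects 'tap' gestures: a point is released within a short time and
-     a small distance from where it touched down."""
-     MAX_TIME = 0.3      # seconds (example value)
-     MAX_DISTANCE = 10   # pixels (example value)
-     def __init__(self, area):
-         self.area = area
-         self.down = {}  # object_id -> (x, y, timestamp)
-     def handle_event(self, event):
-         if event.type == 'point_down':
-             self.down[event.object_id] = (event.x, event.y, time.time())
-         elif event.type == 'point_up' and event.object_id in self.down:
-             x, y, t = self.down.pop(event.object_id)
-             if (time.time() - t < self.MAX_TIME
-                     and abs(event.x - x) < self.MAX_DISTANCE
-                     and abs(event.y - y) < self.MAX_DISTANCE):
-                 self.area.trigger(Gesture('tap', event.x, event.y))
- \end{verbatim}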
- \subsection*{Assignment of a gesture tracker to an area}
- As explained in section \ref{sec:restricting-gestures}, events are
- delegated from an area to some event analysis. The analysis component of
- an area consists of a list of gesture trackers, each tracking a specific
- set of gestures. No two trackers in the list should be tracking the same
- gesture type.
- When a handler for a gesture is ``bound'' to an area, the area asserts
- that it has a tracker that is tracking this gesture. Thus, the
- programmer does not create gesture trackers manually. Figure
- \ref{fig:trackerdiagram} shows the position of gesture trackers in the
- architecture.
- \trackerdiagram{Extension of the diagram from figure
- \ref{fig:areadiagram}, showing the position of gesture trackers in
- the architecture.}
- \section{Serving multiple applications}
- % TODO
- \section{Example usage}
- \label{sec:example}
- This section describes an example that illustrates the API of the
- architecture. The example application listens to tap events on a button.
- The button is located inside an application window, which can be resized
- using pinch gestures.
- % TODO: remove the comments, write this as pseudocode, extend with a
- % draggable circle and an illustrative figure
- \begin{verbatim}
- initialize GUI, creating a window
- # Add widgets representing the application window and button
- rootwidget = new rectangular Widget object
- set rootwidget position and size to that of the application window
- buttonwidget = new rectangular Widget object
- set buttonwidget position and size to that of the GUI button
- # Create an event server that will be started later
- server = new EventServer object
- set rootwidget as root widget for server
- # Define handlers and bind them to corresponding widgets
- begin function resize_handler(gesture)
- resize GUI window
- update position and size of root widget
- end function
- begin function tap_handler(gesture)
- # Perform some action that the button is meant to do
- end function
- bind ('pinch', resize_handler) to rootwidget
- bind ('tap', tap_handler) to buttonwidget
- # Start event server (which in turn starts a driver-specific event server)
- start server
- \end{verbatim}
- \examplediagram{Diagram representation of the example above. Dotted arrows
- represent gestures, normal arrows represent events (unless labeled
- otherwise).}
- \chapter{Test applications}
- \section{Reference implementation in Python}
- \label{sec:implementation}
- % TODO
- % only window.contains on point down, not on move/up
- % a few simple windows and trackers
- To test multi-touch interaction properly, a multi-touch device is required. The
- University of Amsterdam (UvA) has provided access to a multi-touch table from
- PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
- events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.
- The reference implementation is a Proof of Concept that translates TUIO
- messages to some simple touch gestures (see appendix \ref{app:implementation}
- for details).
- % because only this table is available, the event driver concept could only
- % be tested with the TUIO protocol and not be compared with other drivers
- % TODO
- % test programs with PyGame/Cairo
- \chapter{Suggestions for future work}
- % TODO
- % - network protocol (ZeroMQ) for multiple languages and simultaneous processes
- % - use a more formal definition of gestures instead of explicit detection
- %   logic, e.g. a state machine
- % - next step: build a library that contains multiple drivers and complex
- %   gestures
- % - "event filter" instead of "area"
- \section{A generic way for grouping events}
- \label{sec:eventfilter}
- \bibliographystyle{plain}
- \bibliography{report}{}
- \appendix
- \chapter{The TUIO protocol}
- \label{app:tuio}
- The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
- objects, such as fingers or objects on a multi-touch table. Object information
- is sent to the TUIO UDP port (3333 by default).
- For efficiency reasons, the TUIO protocol is encoded using the Open Sound
- Control \cite[OSC]{OSC} format. An OSC server/client implementation is
- available for Python: pyOSC \cite{pyOSC}.
- A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
- However, the execution of an example script yields an error regarding Python's
- built-in \texttt{socket} library. Therefore, the reference implementation uses
- the pyOSC package to receive TUIO messages.
- The two most important message types of the protocol are ALIVE and SET
- messages. An ALIVE message contains the list of session id's that are currently
- ``active'', which in the case of a multi-touch table means that they are
- touching the screen. A SET message provides geometric information of a session
- id, such as position, velocity and acceleration.
- Each session id represents an object. The only type of objects on the
- multi-touch table are what the TUIO protocol calls ``2DCur'': an $(x, y)$
- position on the screen.
- ALIVE messages can be used to determine when an object touches and releases the
- screen. For example, if a session id was present in the previous message but
- not in the current one, the object it represents has been lifted from the screen.
- SET messages provide information about movement. In the case of simple $(x, y)$ positions,
- only the movement vector of the position itself can be calculated. For more
- complex objects such as fiducials, arguments like rotational position and
- acceleration are also included.
- ALIVE and SET messages can be combined to create ``point down'', ``point move''
- and ``point up'' events (as used by the Windows 7 implementation
- \cite{win7touch}).
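- As a sketch, deriving \emph{down} and \emph{up} events from two consecutive
- ALIVE messages amounts to comparing the two sets of session ids (the
- function name below is illustrative; SET messages would additionally be
- translated to \emph{move} events):
- \begin{verbatim}
- def alive_to_events(previous_ids, current_ids):
-     """Derive point_down/point_up events from two consecutive ALIVE
-     messages, given as sets of TUIO session ids."""
-     events = []
-     for session_id in current_ids - previous_ids:
-         events.append(('point_down', session_id))
-     for session_id in previous_ids - current_ids:
-         events.append(('point_up', session_id))
-     return events
- # Example: session 3 appears, session 1 disappears.
- print(alive_to_events({1, 2}, {2, 3}))
- # -> [('point_down', 3), ('point_up', 1)]
- \end{verbatim}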
- TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the left
- top corner of the screen and $(1.0, 1.0)$ the right bottom corner. To focus
- events within a window, a translation to window coordinates is required in the
- client application, as stated by the online specification
- \cite{TUIO_specification}:
- \begin{quote}
- In order to compute the X and Y coordinates for the 2D profiles a TUIO
- tracker implementation needs to divide these values by the actual sensor
- dimension, while a TUIO client implementation consequently can scale these
- values back to the actual screen dimension.
- \end{quote}
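- A sketch of this translation, assuming that the window position and the
- screen dimensions (in pixels) are known to the client application:
- \begin{verbatim}
- def tuio_to_window(x, y, window_x, window_y, screen_width, screen_height):
-     """Scale normalized TUIO coordinates (0.0-1.0) to screen pixels and
-     translate them to window-relative coordinates."""
-     screen_x = x * screen_width
-     screen_y = y * screen_height
-     return screen_x - window_x, screen_y - window_y
- \end{verbatim}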
- \chapter{Experimental program}
- \label{app:experiment}
- % TODO: rewrite intro
- When designing a software library, its API should be understandable and easy to
- use for programmers. To find out the basic requirements for a usable API,
- an experimental program has been written based on the Processing code
- from \cite{processingMT}. The program receives TUIO events and translates them
- to point \emph{down}, \emph{move} and \emph{up} events. These events are then
- interpreted to be (double or single) \emph{tap}, \emph{rotation} or
- \emph{pinch} gestures. A simple drawing program then draws the current state to
- the screen using the PyGame library. The output of the program can be seen in
- figure \ref{fig:draw}.
- \begin{figure}[H]
- \centering
- \includegraphics[scale=0.4]{data/experimental_draw.png}
- \caption{Output of the experimental drawing program. It draws the touch
- points and their centroid on the screen (the centroid is used as center
- point for rotation and pinch detection). It also draws a green
- rectangle which responds to rotation and pinch events.}
- \label{fig:draw}
- \end{figure}
- One of the first observations is the fact that TUIO's \texttt{SET} messages use
- the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
- program multiplies these by its own dimensions, thus showing the entire
- screen in its window. Also, the implementation only works using the TUIO
- protocol. Other drivers are not supported.
- Though using relatively simple math, the rotation and pinch events work
- surprisingly well. Both rotation and pinch use the centroid of all touch
- points. A \emph{rotation} gesture uses the difference in angle relative to the
- centroid of all touch points, and \emph{pinch} uses the difference in distance.
- Both values are normalized using division by the number of touch points. A
- pinch event contains a scale factor, and therefore uses a division of the
- current by the previous average distance to the centroid.
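- As an illustration of this approach, the following sketch computes the
- centroid, a pinch scale factor and a rotation angle from two successive sets
- of touch points. It is a simplified reconstruction, not the exact code of
- the experimental program (it ignores angle wrap-around, for example).
- \begin{verbatim}
- import math
- def centroid(points):
-     """Centroid of a list of (x, y) touch points."""
-     n = float(len(points))
-     return (sum(x for x, y in points) / n, sum(y for x, y in points) / n)
- def pinch_scale(old_points, new_points):
-     """Scale factor: current divided by previous average distance."""
-     def avg_distance(points):
-         cx, cy = centroid(points)
-         return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
-     return avg_distance(new_points) / avg_distance(old_points)
- def rotation_angle(old_points, new_points):
-     """Average change in angle (radians) of the points around the centroid."""
-     cx, cy = centroid(old_points)
-     total = 0.0
-     for (ox, oy), (nx, ny) in zip(old_points, new_points):
-         total += math.atan2(ny - cy, nx - cx) - math.atan2(oy - cy, ox - cx)
-     return total / len(old_points)
- \end{verbatim}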
- There is a flaw in this implementation. Since the centroid is calculated using
- all current touch points, there cannot be two or more rotation or pinch
- gestures simultaneously. On a large multi-touch table, it is desirable to
- support interaction with multiple hands, or multiple persons, at the same time.
- This kind of application-specific requirement should be defined in the
- application itself, whereas the experimental implementation defines detection
- algorithms based on its test program.
- Also, the different detection algorithms are all implemented in the same file,
- making it complex to read or debug, and difficult to extend.
- \end{document}