
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}
% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}
% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
% TODO
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on a separate page
\tableofcontents
\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, and on to multi-touch devices like smartphones and tablets.
Multi-touch devices enable a user to interact with software using hand
gestures, making the interaction more expressive and intuitive. These gestures
are more complex than the primitive ``click'' or ``tap'' events used by
single-touch devices. Some examples of more complex gestures are
``pinch''\footnote{A ``pinch'' gesture is formed by performing a pinching
movement with multiple fingers on a multi-touch surface. Pinch gestures are
often used to zoom in or out on an object.} and ``flick''\footnote{A ``flick''
gesture is the act of grabbing an object and throwing it in a direction on a
touch surface, giving it momentum to move for some time after the hand
releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
may be extended or even replaced with in-air interaction (Microsoft's Kinect
\cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. As gestures grow more complex, the logic
required to detect them grows more complex as well. This challenge limits, or
even deters, application developers from using complex gestures in an
application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. With such an
implementation, application developers would not need to reinvent gesture
detection for every new gesture-based application.

Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
already include detection of commonly used gestures like the \emph{pinch}
gesture. However, this detection logic is tied to the application framework.
Consequently, an application developer who wants to use multi-touch
interaction in an application is forced to use an application framework that
includes support for multi-touch gestures. Moreover, the set of supported
gestures is limited by the application framework of choice. To incorporate a
custom gesture in an application, the application developer needs to extend
the framework, which requires extensive knowledge of the framework's
architecture. Also, if the same gesture is needed in another application that
is based on another framework, the detection logic has to be translated for
use in that framework. Nevertheless, application frameworks are a necessity
when it comes to fast, cross-platform development. A generic architecture
design should therefore aim to be compatible with existing frameworks, and
provide a way to detect and extend gestures independently of the framework.

Application frameworks are written in a specific programming language. To
support multiple frameworks and programming languages, the architecture should
be accessible to applications through a language-independent method of
communication. This intention leads to the concept of a dedicated gesture
detection application that serves gestures to multiple applications at the
same time.

The scope of this thesis is limited to the detection of gestures on
multi-touch surface devices. It presents a design for a generic gesture
detection architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.

\section{Structure of this document}
% TODO: write once the thesis is finished
\chapter{Related work}

\section{Gesture and Activity Recognition Toolkit}
The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. Its authors
state that the best way to classify gestures is to use machine learning.
The programmer trains the program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback
mechanism that the programmer uses to execute custom code when a gesture
is recognized. Though multi-touch input is not directly supported by the
toolkit, the level of abstraction does allow for it to be implemented in
the form of a ``touch'' sensor.

The motivation for using machine learning is the claim that gesture
detection ``is likely to become increasingly complex and unmanageable''
when a set of predefined rules is used to decide whether some sensor input
constitutes a specific gesture. This claim is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures, and flexibility in rule definitions, over-complexity can be
avoided.

\section{Gesture recognition implementation for Windows 7}
The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation in this application is that different gestures
are detected by different gesture trackers, thus separating gesture
detection code into maintainable parts.

% TODO: This is not really 'related', move it to somewhere else
\section{Processing implementation of simple gestures in Android}
An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment
with an export possibility for Android. See also
\url{http://processing.org/}.} can be found in a forum on the Processing
website \cite{processingMT}. The implementation is fairly simple, but it
yields some very appealing results. The detection logic of all gestures is
combined in a single class. This does not allow for extensibility, because
the complexity of this class would increase to an undesirable level (as
predicted by the GART article \cite{GART}). However, the detection logic
itself is partially re-used in the reference implementation of the generic
gesture detection architecture.

\section{Analysis of related work}
The simple Processing implementation of multi-touch events provides most
of the functionality that can be found in existing multi-touch
applications. In fact, many applications for mobile phones and tablets
only use tap and scroll events. For this category of applications, using
machine learning seems excessive. Though the representation of a gesture
as a feature vector in a machine learning algorithm is a generic and
formal way to define a gesture, a programmer-friendly architecture should
also support simple, ``hard-coded'' detection code. A way to separate
different pieces of gesture detection code, thus keeping a code library
manageable and extensible, is to use different gesture trackers.
% FIXME: change title below
\chapter{Design}
\label{chapter:design}
% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}
This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The chapter presents the
architecture as a diagram of relations between different components.
Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
requirements for the architecture, and extend the diagram with components
that meet these requirements. Section \ref{sec:example} describes an
example usage of the architecture in an application.

\subsection*{Position of the architecture in software}
The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram{A diagram showing the position of the architecture
relative to the device driver and a multi-touch application. The input
of the architecture is given by a touch device driver. This input is
translated to complex interaction gestures and passed to the
application that is using the architecture.}
\section{Supporting multiple drivers}
\label{sec:driver-support}
The TUIO protocol \cite{TUIO} is an example of a touch driver protocol that
can be used by multi-touch devices. TUIO uses ALIVE- and SET-messages to
communicate low-level touch events (see appendix \ref{app:tuio} for more
details). These messages are specific to the API of the TUIO protocol.
Other touch drivers may use very different message types. To support more
than one driver in the architecture, there must be some translation from
driver-specific messages to a common format for primitive touch events.
After all, the gesture detection logic in a ``generic'' architecture should
not be implemented based on driver-specific messages. The event types in
this format should be chosen so that multiple drivers can trigger the same
events. If each supported driver were to add its own set of event types to
the common format, the purpose of being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

The component that translates driver-specific messages to common events is
called the \emph{event driver}. The event driver runs in a loop, receiving
and analyzing driver messages. When a sequence of messages is analyzed as
an event, the event driver delegates the event to other components in the
architecture for translation to gestures. This communication flow is
illustrated in figure \ref{fig:driverdiagram}. Support for a touch device
driver can be added by adding an event driver implementation. Which event
driver implementation is used in an application depends on the driver
supported by the touch device being used.

\driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
showing the position of the event driver in the architecture. The event
driver translates driver-specific messages to a common set of events,
which are delegated to analysis components that will interpret them as
more complex gestures.}
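
To make the role of the event driver more concrete, the following minimal
Python sketch illustrates how a driver-specific implementation could deliver
common events to the rest of the architecture. The class and method names
(\texttt{EventDriver}, \texttt{Event}, \texttt{trigger}) are assumptions made
for this illustration and are not part of the reference implementation.

\begin{verbatim}
# Sketch of the event driver concept (illustrative; names are not taken
# from the reference implementation). A driver-specific subclass receives
# driver messages and triggers events from the common set, which are
# delegated to the rest of the architecture.

class Event(object):
    def __init__(self, event_type, object_id, x, y):
        self.type = event_type      # e.g. 'point_down', 'point_move'
        self.object_id = object_id  # identifies the touch object
        self.x, self.y = x, y       # position on the touch surface

class EventDriver(object):
    def __init__(self, delegate):
        # 'delegate' is the component that groups events into areas and
        # runs gesture detection on them.
        self.delegate = delegate

    def trigger(self, event_type, object_id, x, y):
        # Called by driver-specific code whenever its messages amount to
        # one of the common events.
        self.delegate(Event(event_type, object_id, x, y))

    def run(self):
        # A driver-specific subclass implements the receive loop for its
        # own message format (e.g. TUIO ALIVE/SET messages).
        raise NotImplementedError
\end{verbatim}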
\section{Restricting events to a screen area}
\label{sec:restricting-gestures}
% TODO: in introduction: gestures are composed of multiple primitives
Touch input devices are unaware of the graphical input widgets rendered on
the screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on the screen, an application programmer
must restrict the occurrence of a gesture to the area of the screen covered
by that widget. An important question is whether the architecture should
offer a solution to this problem, or leave it to the application developer
to assign gestures to a widget.

The latter case creates a problem when a gesture must be able to occur at
different screen positions at the same time. Consider the example in figure
\ref{fig:ex1}, where two squares must be able to be rotated independently
at the same time. If the developer is left the task of assigning a gesture
to one of the squares, the event analysis component in figure
\ref{fig:driverdiagram} receives all events that occur on the screen.
Assuming that the rotation detection logic detects a single rotation
gesture based on all of its input events, without detecting clusters of
input events, only one rotation gesture can be triggered at a time. When a
user attempts to ``grab'' one rectangle with each hand, the events
triggered by all fingers are combined to form a single rotation gesture
instead of two separate gestures.

\examplefigureone

To overcome this problem, groups of events must be separated by the event
analysis component before any detection logic is executed. An obvious
solution for the given example is to incorporate this separation in the
rotation detection logic itself, using a distance threshold that decides
whether an event should be added to an existing rotation gesture. However,
leaving the task of separating groups of events to detection logic leads to
duplication of code. For instance, if the rotation gesture is replaced by a
\emph{pinch} gesture that enlarges a rectangle, the detection logic that
detects the pinch gesture would have to contain the same code that
separates groups of events for different gestures. Also, a pinch gesture
can be performed using fingers of multiple hands as well, in which case the
use of a simple distance threshold is insufficient. These examples show
that gesture detection logic is hard to implement without knowledge about
(the position of) the widget\footnote{``Widget'' is a name commonly used to
identify an element of a graphical user interface (GUI).} that is receiving
the gesture.

Therefore, a better solution for the assignment of events to gesture
detection is to make the gesture detection component aware of the locations
of application widgets on the screen. To accomplish this, the architecture
must contain a representation of the screen area covered by a widget. This
leads to the concept of an \emph{area}, which represents an area of the
touch surface in which events should be grouped before being delegated to a
form of gesture detection. Examples of simple area implementations are
rectangles and circles. However, areas could be made to represent more
complex shapes.

An area groups events and assigns them to some piece of gesture detection
logic. This possibly triggers a gesture, which must be handled by the
client application. A common way to handle framework events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, and the framework calls this function when the event
occurs. Because developers are familiar with this concept, the architecture
uses a callback mechanism to handle gestures in an application. Since an
area controls the grouping of events and thus the occurrence of gestures in
that area, gesture handlers for a specific gesture type are bound to an
area. Figure \ref{fig:areadiagram} shows the position of areas in the
architecture.

\areadiagram{Extension of the diagram from figure \ref{fig:driverdiagram},
showing the position of areas in the architecture. An area delegates events
to a gesture detection component that triggers gestures. The area then
calls the handler that is bound to the gesture type by the application.}
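
To make the area concept more concrete, the following Python sketch shows
what a rectangular area with callback binding could look like. The method
names (\texttt{contains}, \texttt{bind}, the \texttt{on\_gesture} callback)
are assumptions made for this illustration and are not prescribed by the
architecture.

\begin{verbatim}
# Sketch of a rectangular area (illustrative). An area tests whether an
# event falls inside it, delegates contained events to gesture detection,
# and calls the handlers bound to the resulting gestures.

class RectangularArea(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.handlers = {}  # gesture type -> callback function

    def contains(self, event):
        return (self.x <= event.x <= self.x + self.width and
                self.y <= event.y <= self.y + self.height)

    def bind(self, gesture_type, handler):
        # Bind a callback to a gesture type, e.g. 'rotate' or 'tap'.
        self.handlers[gesture_type] = handler

    def on_gesture(self, gesture):
        # Called by the gesture detection component when a gesture is
        # detected from events grouped by this area.
        handler = self.handlers.get(gesture.type)
        if handler is not None:
            handler(gesture)
\end{verbatim}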
Note that the boundaries of an area are only used to group events, not
gestures. A gesture could occur outside the area that contains its
originating events, as illustrated by the example in figure \ref{fig:ex2}.

\examplefiguretwo

A remark must be made about the use of areas to assign events to the
detection of some gesture. The concept of an ``area'' is based on the
assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
This is a reasonable assumption for simple touch objects whose only
parameter is a position, such as a pen or a human finger. However, more
complex touch objects can have additional parameters, such as rotational
orientation or color. An even more generic concept is the \emph{event
filter}, which decides whether an event should be assigned to a particular
piece of gesture detection based on all available parameters. This level of
abstraction allows for constraints like ``use all blue objects within a
widget for rotation, and green objects for tapping''. As mentioned in the
introduction (chapter \ref{chapter:introduction}), the scope of this thesis
is limited to multi-touch surface based devices, for which the \emph{area}
concept suffices. Section \ref{sec:eventfilter} explores the possibility of
replacing areas with event filters.

\subsection*{Reserving an event for a gesture}
The simplest implementation of areas in the architecture is a list of
areas. When the event driver delegates an event, it is delegated to gesture
detection by each area that contains the event coordinates. A problem
occurs when areas overlap, as shown in figure \ref{fig:ex3}. When the
white rectangle is rotated, the gray square should keep its current
orientation. This means that events that are used for rotation of the white
rectangle should not be used for rotation of the gray square. To achieve
this, there must be some communication between the rotation detection
components of the two squares.

\examplefigurethree

% The simplest approach is a list of areas: if an event falls inside an
% area, delegate it. Problem (illustrate with an example of nested widgets
% that both listen for a tap): if areas overlap, certain events should be
% reserved for certain pieces of detection logic.
% Solution: store areas in a tree structure and use event propagation
% -> an area inside a parent area can propagate events to that parent, and
% detection logic can stop the propagation. To propagate upwards in the
% tree, the event must first arrive at the leaf, so first delegate down to
% the lowest leaf node that contains the event.
% Special case: overlapping areas in the same layer of the tree. In that
% case, the area that was added later (the right sibling) is assumed to
% lie on top of the sibling to its left and therefore receives the event
% first. If propagation is stopped in the upper (right) area, the one
% behind it (the left sibling) does not receive the event either.
% Additional advantage of the tree structure: easy to integrate with e.g.
% GTK, which uses a tree structure for its widgets -> create an area for
% every widget that has touch events.
%For example, a button tap\footnote{A ``tap'' gesture is triggered when a
%touch object releases a touch surface within a certain time and distance
%from the point where it initially touched the surface.} should only occur
%on the button itself, and not in any other area of the screen. A solution
%to this problem is the use of \emph{widgets}. The button from the example
%can be represented as a rectangular widget with a position and size. The
%position and size are compared with event coordinates to determine whether
%an event should occur within the button.
\subsection*{Area tree}
A problem occurs when widgets overlap. If a button is placed over a
container and an event occurs inside the button, should the button
handle the event first? And should the container receive the event at
all, or should it be reserved for the button?

The solution to this problem is to store widgets in a tree structure.
There is one root widget, whose size is limited by the size of the
touch screen. Being the leaf widget, and thus the widget that is
actually touched when an object touches the device, the button widget
should receive an event before its container does. However, events
occur on a screen-wide level and thus at the root level of the widget
tree. Therefore, an event is delegated down the tree before any
analysis is performed. Delegation stops at the ``lowest'' widget in the
tree containing the event coordinates. That widget then performs some
analysis of the event, after which the event is released back to the
parent widget for analysis. This release of an event to a parent widget
is called \emph{propagation}. To be able to reserve an event for some
widget or analysis, the propagation of an event can be stopped during
analysis.
% TODO: inspired by JavaScript DOM
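
The delegation and propagation mechanism described above can be sketched in
Python as follows. The sketch assumes that each widget knows its screen area
and a (possibly empty) list of gesture trackers; the names are illustrative
and do not correspond to the reference implementation.

\begin{verbatim}
# Sketch of event delegation and propagation in a widget tree
# (illustrative). An event is first delegated to the deepest widget
# containing its coordinates; each widget then analyses the event and
# propagates it to its parent unless propagation is stopped.

class Widget(object):
    def __init__(self, area):
        self.area = area    # e.g. a RectangularArea
        self.children = []
        self.parent = None
        self.trackers = []  # gesture trackers attached to this widget

    def add_child(self, widget):
        widget.parent = self
        self.children.append(widget)

    def delegate(self, event):
        # Walk down to the deepest child widget containing the event.
        # Later siblings are assumed to lie on top, so try them first.
        for child in reversed(self.children):
            if child.area.contains(event):
                return child.delegate(event)
        self.analyse(event)

    def analyse(self, event):
        # Let the attached gesture trackers analyse the event. A tracker
        # returns True to stop propagation and reserve the event.
        stopped = False
        for tracker in self.trackers:
            if tracker.handle_event(event):
                stopped = True
        if not stopped and self.parent is not None:
            self.parent.analyse(event)
\end{verbatim}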
Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
manage their widgets. This makes it easy to connect the architecture to
such a framework. For example, the programmer can define a
\texttt{GtkTouchWidget} that synchronises the position of a touch
widget with that of a GTK widget, using GTK signals.

\section{Detecting gestures from events}
\label{sec:gesture-detection}
The events that are grouped by areas must be translated to complex gestures
in some way. This analysis is specific to the type of gesture being
detected: the detection of a ``tap'' gesture, for example, is very
different from the detection of a ``rotate'' gesture. The architecture
adopts the \emph{gesture tracker}-based design described in
\cite{win7touch}, which separates the detection of different gestures into
different \emph{gesture trackers}. This keeps the different pieces of
gesture detection code manageable and extensible. A single gesture tracker
detects a specific set of gesture types, given a set of primitive events.
An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
% TODO: a formal definition of gestures might be better, but is not given
% in this thesis (it is discussed in the future work chapter)
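
As an illustration of the gesture tracker concept, the sketch below detects
``tap'' gestures from the common point events. The interface
(\texttt{handle\_event}, the time and distance thresholds) is an assumption
made for this example, not the interface of the reference implementation.

\begin{verbatim}
# Sketch of a gesture tracker that detects 'tap' gestures (illustrative).
# A tap is triggered when a touch point is released close to where it
# went down, within a short time window.

import time

class TapTracker(object):
    MAX_DISTANCE = 20    # pixels (assumed threshold)
    MAX_DURATION = 0.3   # seconds (assumed threshold)

    def __init__(self, on_gesture):
        # 'on_gesture' is called with a gesture description, e.g. the
        # on_gesture method of the area that owns this tracker.
        self.on_gesture = on_gesture
        self.down = {}  # object id -> (x, y, timestamp)

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down[event.object_id] = (event.x, event.y, time.time())
        elif event.type == 'point_up' and event.object_id in self.down:
            x, y, t = self.down.pop(event.object_id)
            dx, dy = event.x - x, event.y - y
            if (dx * dx + dy * dy) ** 0.5 <= self.MAX_DISTANCE \
                    and time.time() - t <= self.MAX_DURATION:
                self.on_gesture({'type': 'tap',
                                 'x': event.x, 'y': event.y})
        return False  # do not stop propagation in this sketch
\end{verbatim}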
\subsection*{Assignment of a gesture tracker to an area}
As explained in section \ref{sec:restricting-gestures}, events are
delegated from a widget to some event analysis. The analysis component
of a widget consists of a list of gesture trackers, each tracking a
specific set of gestures. No two trackers in the list should be
tracking the same gesture type.

When a handler for a gesture is ``bound'' to a widget, the widget
asserts that it has a tracker that is tracking this gesture. Thus, the
programmer does not create gesture trackers manually. Figure
\ref{fig:trackerdiagram} shows the position of gesture trackers in the
architecture.

\trackerdiagram{Extension of the diagram from figure
\ref{fig:widgetdiagram}, showing the position of gesture trackers in
the architecture.}
\section{Serving multiple applications}
% TODO

\section{Example usage}
\label{sec:example}
This section describes an example that illustrates the API of the
architecture. The example application listens to tap events on a button.
The button is located inside an application window, which can be resized
using pinch gestures.
% TODO: remove the comments, write as pseudocode, extend with a draggable
% circle and an illustrative figure
\begin{verbatim}
initialize GUI, creating a window

# Add widgets representing the application window and button
rootwidget = new rectangular Widget object
set rootwidget position and size to that of the application window
buttonwidget = new rectangular Widget object
set buttonwidget position and size to that of the GUI button

# Create an event server that will be started later
server = new EventServer object
set rootwidget as root widget for server

# Define handlers and bind them to the corresponding widgets
begin function resize_handler(gesture)
    resize GUI window
    update position and size of root widget
end function

begin function tap_handler(gesture)
    # Perform some action that the button is meant to do
end function

bind ('pinch', resize_handler) to rootwidget
bind ('tap', tap_handler) to buttonwidget

# Start event server (which in turn starts a driver-specific event server)
start server
\end{verbatim}
\examplediagram{Diagram representation of the example above. Dotted arrows
represent gestures, normal arrows represent events (unless labeled
otherwise).}

\chapter{Test applications}

\section{Reference implementation in Python}
\label{sec:implementation}
% TODO
% only window.contains on point down, not on move/up
% a few simple windows and trackers
To test multi-touch interaction properly, a multi-touch device is required.
The University of Amsterdam (UvA) has provided access to a multi-touch
table from PQlabs. The table uses the TUIO protocol \cite{TUIO} to
communicate touch events. See appendix \ref{app:tuio} for details regarding
the TUIO protocol. The reference implementation is a proof of concept that
translates TUIO messages to some simple touch gestures (see appendix
\ref{app:implementation} for details).
% Because we only have access to this table, the event driver concept can
% only be tested with the TUIO protocol and cannot be compared with other
% drivers.
% TODO
% test programs with PyGame/Cairo
\chapter{Suggestions for future work}
% TODO
% - network protocol (ZeroMQ) for multiple languages and simultaneous
%   processes
% - use a more formal definition of gestures instead of explicit detection
%   logic, e.g. a state machine
% - next step: create a library that contains multiple drivers and complex
%   gestures
% - "event filter" instead of "area"

\section{A generic way for grouping events}
\label{sec:eventfilter}

\bibliographystyle{plain}
\bibliography{report}{}

\appendix
\chapter{The TUIO protocol}
\label{app:tuio}
The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}. A Python implementation of the TUIO
protocol also exists: pyTUIO \cite{pyTUIO}. However, the execution of an
example script yields an error regarding Python's built-in \texttt{socket}
library. Therefore, the reference implementation uses the pyOSC package to
receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session id's that are
currently ``active'', which in the case of a multi-touch table means that
they are touching the screen. A SET message provides geometric information
for a session id, such as position, velocity and acceleration. Each session
id represents an object. The only type of object on the multi-touch table is
what the TUIO protocol calls ``2DCur'', which is an $(x, y)$ position on the
screen.

ALIVE messages can be used to determine when an object touches and releases
the screen. For example, if a session id was present in the previous ALIVE
message but not in the current one, the object it represents has been lifted
from the screen. SET messages provide information about movement. In the case
of simple $(x, y)$ positions, only the movement vector of the position itself
can be calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point
move'' and ``point up'' events (as used by the Windows 7 implementation
\cite{win7touch}).
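
A minimal Python sketch of this combination step is given below; the function
names and the callback parameter are assumptions made for this illustration,
not part of the reference implementation.

\begin{verbatim}
# Sketch of combining ALIVE and SET messages into point events
# (illustrative). 'trigger' is a callback that receives the common
# events, 'alive' maps session ids to their last known position.

def handle_set(alive, session_id, x, y, trigger):
    # A SET message for an unknown session id means a new touch point;
    # otherwise the point has moved.
    if session_id in alive:
        trigger('point_move', session_id, x, y)
    else:
        trigger('point_down', session_id, x, y)
    alive[session_id] = (x, y)

def handle_alive(alive, session_ids, trigger):
    # Session ids missing from the ALIVE list have been lifted from
    # the screen.
    for sid in list(alive):
        if sid not in session_ids:
            x, y = alive.pop(sid)
            trigger('point_up', sid, x, y)
\end{verbatim}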
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To
position events within a window, a translation to window coordinates is
required in the client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
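
A minimal example of this scaling step (the function name and window
dimensions are assumed for illustration):

\begin{verbatim}
# Scale normalized TUIO coordinates (0.0 - 1.0) to window coordinates
# (illustrative example).
def tuio_to_window(x, y, window_width, window_height):
    return (x * window_width, y * window_height)
\end{verbatim}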
\chapter{Experimental program}
\label{app:experiment}
% TODO: rewrite intro
When designing a software library, its API should be understandable and easy
to use for programmers. To find out the basic requirements for a usable API,
an experimental program has been written based on the Processing code from
\cite{processingMT}. The program receives TUIO events and translates them to
point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted as (double or single) \emph{tap}, \emph{rotation} or \emph{pinch}
gestures. A simple drawing program then draws the current state to the screen
using the PyGame library. The output of the program can be seen in figure
\ref{fig:draw}.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.4]{data/experimental_draw.png}
    \caption{Output of the experimental drawing program. It draws the touch
    points and their centroid on the screen (the centroid is used as center
    point for rotation and pinch detection). It also draws a green
    rectangle which responds to rotation and pinch events.}
    \label{fig:draw}
\end{figure}

One of the first observations is that TUIO's \texttt{SET} messages use the
TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these coordinates by its own dimensions, thus mapping the
entire screen to its window. Also, the implementation only works with the
TUIO protocol; other drivers are not supported.

Though based on relatively simple math, the rotation and pinch events work
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle of each touch
point relative to the centroid, and a \emph{pinch} uses the difference in
distance. Both values are normalized by dividing by the number of touch
points. A pinch event contains a scale factor, and therefore uses the
division of the current by the previous average distance to the centroid.
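
The computation described above could be sketched in Python as follows. This
is an approximation based on the description, not a copy of the experimental
code.

\begin{verbatim}
# Sketch of centroid-based rotation and pinch computation (an
# approximation of the experimental code, not a copy of it).
from math import atan2, hypot

def centroid(points):
    n = float(len(points))
    return (sum(x for x, y in points) / n,
            sum(y for x, y in points) / n)

def rotation_delta(prev_points, points):
    # Average change in angle of each touch point relative to the
    # centroid, normalized by the number of touch points.
    (pcx, pcy), (cx, cy) = centroid(prev_points), centroid(points)
    total = 0.0
    for (px, py), (x, y) in zip(prev_points, points):
        total += atan2(y - cy, x - cx) - atan2(py - pcy, px - pcx)
    return total / len(points)

def pinch_scale(prev_points, points):
    # Ratio of the current to the previous average distance to the
    # centroid, used as the scale factor of a pinch gesture.
    (pcx, pcy), (cx, cy) = centroid(prev_points), centroid(points)
    prev_avg = sum(hypot(px - pcx, py - pcy)
                   for px, py in prev_points) / len(prev_points)
    cur_avg = sum(hypot(x - cx, y - cy)
                  for x, y in points) / len(points)
    return cur_avg / prev_avg
\end{verbatim}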
There is a flaw in this implementation. Since the centroid is calculated
using all current touch points, there cannot be two or more rotation or
pinch gestures at the same time. On a large multi-touch table, it is
desirable to support interaction with multiple hands, or multiple persons,
at the same time. Requirements like these are application-specific and
should be defined in the application itself, whereas the experimental
implementation tailors its detection algorithms to its own test program.
Also, the different detection algorithms are all implemented in the same
file, which makes the code hard to read, debug and extend.

\end{document}