\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,float,tikz}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
% TODO
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents
\chapter{Introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
the primitive ``click'' or ``tap'' events that are used by single-touch devices.
Some examples of more complex gestures are so-called ``pinch''\footnote{A
``pinch'' gesture is formed by performing a pinching movement with multiple
fingers on a multi-touch surface. Pinch gestures are often used to zoom in or
out on an object.} and ``flick''\footnote{A ``flick'' gesture is the act of
grabbing an object and throwing it in a direction on a touch surface, giving
it momentum to move for some time after the hand releases the surface.}
gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. As gestures become more complex, the complexity
of the logic required to detect them increases as well. This challenge limits,
or even deters, application developers from using complex gestures in their
applications.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic.
Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
include the detection of commonly used gestures like \emph{pinch} gestures.
However, this detection logic is dependent on the application framework.
Consequently, an application developer who wants to use multi-touch interaction
in an application is forced to choose an application framework that includes
support for multi-touch gestures. Therefore, a requirement of the generic
architecture is that it must not be bound to a specific application framework.

Moreover, the set of supported gestures is limited by the application framework
of choice. To incorporate a custom gesture in an application, the application
developer needs to extend the framework. This requires extensive knowledge of
the framework's architecture. Also, if the same gesture is used in another
application that is based on another framework, the detection logic has to be
translated for use in that framework. Nevertheless, application frameworks are
a necessity when it comes to fast, cross-platform development. Therefore, the
architecture design should aim to be compatible with existing frameworks, but
provide a way to detect and extend gestures independently of the framework.

An application framework is written in a specific programming language. A
generic architecture should not be limited to a single programming language.
The ultimate goal of this thesis is to provide support for complex gesture
interaction in any application. Thus, applications should be able to address
the architecture using a language-independent method of communication. This
intention leads towards the concept of a dedicated gesture detection
application that serves gestures to multiple programs at the same time.

The scope of this thesis is limited to the detection of gestures on multi-touch
surface devices. It presents a design for a generic gesture detection
architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.

% FIXME: Does this still belong in the introduction?
% How can the input of the architecture be normalized? This is needed, because
% multi-touch drivers use their own specific message format.
\section{Structure of this document}
% TODO: write once the thesis is finished

\chapter{Related work}
\section{Gesture and Activity Recognition Toolkit}

The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. The toolkit's
authors state that the best way to classify gestures is to use machine
learning. The programmer trains a program to recognize gestures using the
machine learning library from the toolkit. The toolkit contains a callback
mechanism that the programmer uses to execute custom code when a gesture is
recognized. Though multi-touch input is not directly supported by the
toolkit, the level of abstraction does allow for it to be implemented in the
form of a ``touch'' sensor.

The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when using a
set of predefined rules to detect whether some sensor input can be seen as
a specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures and flexibility in rule definitions, over-complexity can be
avoided.
\section{Gesture recognition implementation for Windows 7}

The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts. The architecture presented in this thesis
adopts this design feature by also using different gesture trackers to
track different gesture types.
% TODO: This is not really 'related', move it to somewhere else
\section{Processing implementation of simple gestures in Android}

An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment with
an export possibility for Android. See also \url{http://processing.org/}.}
can be found in a forum on the Processing website \cite{processingMT}. The
implementation is fairly simple, but it yields some very appealing results.
The detection logic of all gestures is combined in a single class. This
does not allow for extensibility, because the complexity of this class
would increase to an undesirable level (as predicted by the GART article
\cite{GART}). However, the detection logic itself is partially re-used in
the reference implementation of the generic gesture detection architecture.
\section{Analysis of related work}

The simple Processing implementation of multi-touch events provides most of
the functionality that can be found in existing multi-touch applications.
In fact, many applications for mobile phones and tablets only use tap and
scroll events. For this category of applications, using machine learning
seems excessive. Though the representation of a gesture as a feature
vector in a machine learning algorithm is a generic and formal way to
define a gesture, a programmer-friendly architecture should also support
simple, ``hard-coded'' detection code. A way to separate different pieces
of gesture detection code, thus keeping a code library manageable and
extensible, is to use different gesture trackers.
% FIXME: change title below
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The chapter represents the
architecture as a diagram of relations between different components.
Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
requirements for the architecture, and extend the diagram with components
that meet these requirements. Section \ref{sec:example} describes an
example usage of the architecture in an application.

\subsection*{Position of the architecture in software}

The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram{A diagram showing the position of the architecture
relative to the device driver and a multi-touch application. The input
of the architecture is given by a touch device driver. This input is
translated to complex interaction gestures and passed to the
application that is using the architecture.}
\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol \cite{TUIO} is an example of a touch driver protocol that
is used by multi-touch devices. TUIO uses ALIVE- and SET-messages to
communicate low-level touch events (see appendix \ref{app:tuio} for more
details). These messages are specific to the API of the TUIO protocol. Other
touch drivers may use very different message types. To support more than
one driver in the architecture, there must be some translation from
driver-specific messages to a common format for primitive touch events.
After all, the gesture detection logic in a ``generic'' architecture should
not be implemented based on driver-specific messages. The event types in
this format should be chosen so that multiple drivers can trigger the same
events. If each supported driver adds its own set of event types to the
common format, the purpose of being ``common'' would be defeated.

A reasonable expectation for a touch device driver is that it detects
simple touch points, with a ``point'' being an object at an $(x, y)$
position on the touch surface. This yields a basic set of events:
$\{point\_down, point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.
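As an illustration, a minimal representation of such a common event could
look as follows. This is a sketch in Python (the language of the reference
implementation); the class and attribute names are hypothetical and not
prescribed by the architecture.

\begin{verbatim}
# Sketch of a common event format (names are illustrative).
class Event(object):
    def __init__(self, event_type, x, y, object_id, angle=None):
        self.type = event_type      # e.g. 'point_down' or 'object_rotate'
        self.x = x                  # position on the touch surface
        self.y = y
        self.object_id = object_id  # identifies the touch object
        self.angle = angle          # only used by 'object_*' events
\end{verbatim}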
The component that translates driver-specific messages to common events
will be called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
components in the architecture for translation to gestures. This
communication flow is illustrated in figure \ref{fig:driverdiagram}.

A touch device driver can be supported by adding an event driver
implementation for it. The event driver implementation that is used in an
application thus depends on the touch device that is used.
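A minimal sketch of what an event driver could look like is given below.
The method names (\texttt{receive\_message}, \texttt{translate}) and the
\texttt{delegate} callback are assumptions for the purpose of illustration,
not part of the actual design.

\begin{verbatim}
# Sketch of an event driver base class (names are illustrative).
class EventDriver(object):
    def __init__(self, delegate):
        self.delegate = delegate    # callback into the architecture

    def run(self):
        # Receive driver-specific messages and translate them to common
        # events, which are delegated to the rest of the architecture.
        while True:
            message = self.receive_message()
            for event in self.translate(message):
                self.delegate(event)

    def receive_message(self):
        raise NotImplementedError   # implemented per driver (e.g. TUIO)

    def translate(self, message):
        raise NotImplementedError   # driver-specific messages -> events
\end{verbatim}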
\driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
showing the position of the event driver in the architecture. The event
driver translates driver-specific messages to a common set of events, which
are delegated to analysis components that will interpret them as more
complex gestures.}
\section{Restricting gestures to a screen area}
% TODO: in introduction: gestures are composed of multiple primitives

Touch input devices are unaware of the graphical input widgets rendered on
the screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on screen, an application programmer must
restrict the occurrence of a gesture to the area of the screen covered by
that widget. An important question is whether the architecture should offer
a solution to this problem, or leave it to the programmer to assign gestures
to a widget.

% TODO: first: leave it to the developer, refer to the previous diagram.
% Then: consider the following example: ... two squares that both listen for
% rotation (figure for illustration): if you rotate them at the same time,
% only a single global event occurs. So: do not restrict gestures to an
% area, but events. Then separate detection logic can be placed on each
% square, with the events at that location as input. In other words: it
% cannot be left to the developer, because the input of the detection logic
% has to change (which the developer has no influence on). Conclusion: it
% must be possible to restrict events to an ``area'' of the screen. At this
% point the diagram can already be extended.
% Then: the simplest approach is a list of areas; if an event falls inside
% one, delegate it to that area. Problem (illustrate with an example of
% nested widgets that both listen for taps): if areas overlap, certain
% events should be reserved for certain pieces of detection logic.
% Solution: store areas in a tree structure and use event propagation ->
% an area inside a parent area can propagate events to that parent, and
% detection logic can stop the propagation. To propagate up the tree, the
% event first has to reach a leaf, so it is first delegated down to the
% lowest leaf node that contains the event.
% Special case: overlapping areas in the same layer of the tree. In that
% case, the area that was added later (the right sibling) is assumed to lie
% on top of the sibling to its left and therefore receives the event first.
% If propagation is stopped in the topmost (right) area, the (left) sibling
% behind it does not receive the event either.
% Additional advantage of the tree structure: it is easy to integrate with
% e.g. GTK, which uses a tree structure for its widgets -> create an area
% for every widget that receives touch events.
Gestures are composed of primitive events using detection logic. If a
particular gesture should only occur within some area of the screen, it
should be composed only of events that occur within that area. Events that
occur outside the area are not likely to be relevant to the gesture. In
other words, the gesture detection logic is affected by the area in which
the gestures should be detected. Since the detection logic is part of the
architecture, the architecture must be able to restrict the set of events
that is delegated to the particular piece of detection logic for the
gesture being detected in that area.

For example, a button tap\footnote{A ``tap'' gesture is triggered when a
touch object releases a touch surface within a certain time and distance
from the point where it initially touched the surface.} should only occur
on the button itself, and not in any other area of the screen. A solution
to this problem is the use of \emph{widgets}. The button from the example
can be represented as a rectangular widget with a position and size. The
position and size are compared with event coordinates to determine whether
an event occurs within the button.
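As a simple illustration, such a comparison could be implemented as follows.
This is a sketch with hypothetical names; the reference implementation may
structure it differently.

\begin{verbatim}
# Sketch of a rectangular widget that can test whether an event
# occurs within its area.
class RectangularWidget(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def contains(self, event):
        # True if the event coordinates lie within this rectangle.
        return (self.x <= event.x <= self.x + self.width
                and self.y <= event.y <= self.y + self.height)
\end{verbatim}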
\subsection*{Callbacks}
\label{sec:callbacks}

When an event is propagated by a widget, it is first used for event
analysis on that widget. The event analysis can then trigger a gesture
in the widget, which has to be handled by the application. To handle a
gesture, the widget should provide a callback mechanism: the
application binds a handler for a specific type of gesture to a widget.
When a gesture of that type is triggered after event analysis, the
widget triggers the callback.
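A minimal sketch of such a callback mechanism is shown below. The method
names \texttt{bind} and \texttt{trigger} are chosen for illustration only.

\begin{verbatim}
# Sketch of the callback mechanism of a widget (names are illustrative).
class Widget(object):
    def __init__(self):
        self.handlers = {}          # gesture type -> handler function

    def bind(self, gesture_type, handler):
        self.handlers[gesture_type] = handler

    def trigger(self, gesture):
        # Call the handler bound to this gesture type, if there is one.
        handler = self.handlers.get(gesture.type)
        if handler is not None:
            handler(gesture)
\end{verbatim}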
\subsection*{Widget tree}

A problem occurs when widgets overlap. If a button is placed over a
container and an event occurs inside the button, should the
button handle the event first? And should the container receive the
event at all, or should it be reserved for the button?

The solution to this problem is to store widgets in a tree structure.
There is one root widget, whose size is limited by the size of the
touch screen. Being the leaf widget, and thus the widget that is
actually touched when an object touches the device, the button widget
should receive an event before its container does. However, events
occur on a screen-wide level and thus at the root level of the widget
tree. Therefore, an event is delegated down the tree before any analysis
is performed. Delegation stops at the ``lowest'' widget in the tree
containing the event coordinates. That widget then performs some
analysis of the event, after which the event is released back to the
parent widget for analysis. This release of an event to a parent widget
is called \emph{propagation}. To be able to reserve an event for some
widget or analysis, the propagation of an event can be stopped during
analysis.
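The following sketch illustrates this delegation and propagation scheme.
The attribute and method names are assumptions for illustration; here,
\texttt{analyze} is assumed to return \texttt{False} when a widget stops
the propagation of an event.

\begin{verbatim}
# Sketch of event delegation and propagation in the widget tree.
def handle_event(root, event):
    # Walk down the tree to the lowest widget containing the event.
    widget = root
    while True:
        child = next((c for c in widget.children if c.contains(event)),
                     None)
        if child is None:
            break
        widget = child

    # Analyze the event and propagate it towards the root, unless some
    # widget stops the propagation during analysis.
    while widget is not None and widget.analyze(event):
        widget = widget.parent
\end{verbatim}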
% TODO: inspired by JavaScript DOM
Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
manage their widgets. This makes it easy to connect the architecture to
such a framework. For example, the programmer can define a
\texttt{GtkTouchWidget} that synchronizes the position of a touch
widget with that of a GTK widget, using GTK signals.

\subsection*{Position of the widget tree in the architecture}

\widgetdiagram{Extension of the diagram from figure
\ref{fig:driverdiagram}, showing the position of widgets in the
architecture.}
\section{Event analysis}
\label{sec:event-analysis}
% TODO: the essence should be that gesture trackers divide detection logic
% into manageable pieces and are assigned to a single area, so that multiple
% trackers can run simultaneously on different parts of the screen. A formal
% definition of gestures might be better, but is not given in this thesis
% (it is discussed in future work).

The events that are delegated to widgets must be analyzed in some way to
detect gestures. This analysis is specific to the type of gesture being
detected. For example, the detection of a ``tap'' gesture is very different
from the detection of a ``rotate'' gesture. The implementation described in
\cite{win7touch} separates the detection of different gestures into
different \emph{gesture trackers}. This keeps the different pieces of
detection code manageable and extensible. Therefore, the architecture also
uses gesture trackers to separate the analysis of events. A single gesture
tracker detects a specific set of gesture types, given a sequence of events.
An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
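To illustrate the concept, a gesture tracker for simple ``tap'' gestures
could be sketched as follows. The class names, the \texttt{Gesture}
container and the thresholds are all illustrative assumptions; a detected
gesture is reported through the widget's callback mechanism sketched
earlier.

\begin{verbatim}
import time

# Minimal container for a detected gesture (illustrative).
class Gesture(object):
    def __init__(self, gesture_type, x, y):
        self.type, self.x, self.y = gesture_type, x, y

# Sketch of a gesture tracker that detects 'tap' gestures.
class TapTracker(object):
    MAX_DURATION = 0.3   # seconds (arbitrary threshold)
    MAX_DISTANCE = 10    # pixels (arbitrary threshold)

    def __init__(self, widget):
        self.widget = widget
        self.downs = {}  # object id -> (x, y, time of 'point_down')

    def on_event(self, event):
        if event.type == 'point_down':
            self.downs[event.object_id] = (event.x, event.y, time.time())
        elif event.type == 'point_up' and event.object_id in self.downs:
            x, y, t = self.downs.pop(event.object_id)
            near = (abs(event.x - x) <= self.MAX_DISTANCE and
                    abs(event.y - y) <= self.MAX_DISTANCE)
            if near and time.time() - t <= self.MAX_DURATION:
                self.widget.trigger(Gesture('tap', event.x, event.y))
\end{verbatim}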
\subsection*{Assignment of a gesture tracker to a widget}

As explained in section \ref{sec:callbacks}, events are delegated from
a widget to some event analysis. The analysis component of a widget
consists of a list of gesture trackers, each tracking a specific set of
gestures. No two trackers in the list should be tracking the same
gesture type.

When a handler for a gesture is ``bound'' to a widget, the widget
asserts that it has a tracker that is tracking this gesture. Thus, the
programmer does not create gesture trackers manually. Figure
\ref{fig:trackerdiagram} shows the position of gesture trackers in the
architecture.

\trackerdiagram{Extension of the diagram from figure
\ref{fig:widgetdiagram}, showing the position of gesture trackers in
the architecture.}
\section{Serving multiple applications}
% TODO

\section{Example usage}
\label{sec:example}

This section describes an example that illustrates the API of the
architecture. The example application listens to tap events on a button.
The button is located inside an application window, which can be resized
using pinch gestures.

% TODO: remove comments, write down in pseudocode
\begin{verbatim}
initialize GUI, creating a window

# Add widgets representing the application window and button
rootwidget = new rectangular Widget object
set rootwidget position and size to that of the application window
buttonwidget = new rectangular Widget object
set buttonwidget position and size to that of the GUI button

# Create an event server that will be started later
server = new EventServer object
set rootwidget as root widget for server

# Define handlers and bind them to corresponding widgets
begin function resize_handler(gesture)
    resize GUI window
    update position and size of root widget
end function

begin function tap_handler(gesture)
    # Perform some action that the button is meant to do
end function

bind ('pinch', resize_handler) to rootwidget
bind ('tap', tap_handler) to buttonwidget

# Start event server (which in turn starts a driver-specific event server)
start server
\end{verbatim}
\examplediagram{Diagram representation of the example above. Dotted arrows
represent gestures, normal arrows represent events (unless labeled
otherwise).}

\chapter{Test applications}

To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.
The reference implementation is a Proof of Concept that translates TUIO
messages to some simple touch gestures (see appendix \ref{app:implementation}
for details).

% TODO
% test programs with PyGame/Cairo

\chapter{Suggestions for future work}
% TODO
% - network protocol (ZeroMQ) for multiple languages and simultaneous processes
% - use a more formal definition of gestures instead of explicit detection
%   logic, e.g. a state machine
% - next step: create a library that contains multiple drivers and complex
%   gestures
\bibliographystyle{plain}
\bibliography{report}{}

\appendix

\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.
A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.
The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a session
id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'', which is an
$(x, y)$ position on the screen.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was in the previous message but not in the
current one, the object it represents has been lifted from the screen.
SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the Windows 7 implementation
\cite{win7touch}).
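For example, ``point down'' and ``point up'' events can be derived from two
consecutive ALIVE messages by comparing their sets of session ids. The
following sketch (not the actual implementation) illustrates this:

\begin{verbatim}
# Sketch: derive down/up events from two consecutive ALIVE messages.
def alive_to_events(previous_ids, current_ids):
    events = []
    for session_id in current_ids - previous_ids:
        events.append(('point_down', session_id))   # new on the surface
    for session_id in previous_ids - current_ids:
        events.append(('point_up', session_id))     # lifted from surface
    return events

# alive_to_events({1, 2}, {2, 3}) yields a 'point_down' for session id 3
# and a 'point_up' for session id 1.
\end{verbatim}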
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
    In order to compute the X and Y coordinates for the 2D profiles a TUIO
    tracker implementation needs to divide these values by the actual sensor
    dimension, while a TUIO client implementation consequently can scale these
    values back to the actual screen dimension.
\end{quote}
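In other words, a client first scales the normalized coordinates to the
screen dimensions and then translates them to the coordinate system of the
window. A sketch of such a conversion, with hypothetical parameter names and
assuming the TUIO surface maps onto the entire screen:

\begin{verbatim}
# Sketch: map normalized TUIO coordinates to window coordinates.
def tuio_to_window(x, y, screen_width, screen_height, win_x, win_y):
    # Scale to screen coordinates, then translate to the window origin.
    return (x * screen_width - win_x, y * screen_height - win_y)
\end{verbatim}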
\chapter{Experimental program}
\label{app:experiment}
% TODO: rewrite intro

When designing a software library, its API should be understandable and easy to
use for programmers. To find out the basic requirements for the API to be
usable, an experimental program has been written, based on the Processing code
from \cite{processingMT}. The program receives TUIO events and translates them
to point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted as (double or single) \emph{tap}, \emph{rotation} or
\emph{pinch} gestures. A simple drawing program then draws the current state to
the screen using the PyGame library. The output of the program can be seen in
figure \ref{fig:draw}.

\begin{figure}[H]
    \centering
    \includegraphics[scale=0.4]{data/experimental_draw.png}
    \caption{Output of the experimental drawing program. It draws the touch
    points and their centroid on the screen (the centroid is used as center
    point for rotation and pinch detection). It also draws a green
    rectangle which responds to rotation and pinch events.}
    \label{fig:draw}
\end{figure}
One of the first observations is the fact that TUIO's \texttt{SET} messages use
the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these coordinates by its own dimensions, thus showing the
entire screen in its window. Also, the implementation only works using the TUIO
protocol; other drivers are not supported.

Though it uses relatively simple math, the rotation and pinch events work
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle relative to the
centroid of all touch points, and \emph{pinch} uses the difference in distance.
Both values are normalized using division by the number of touch points. A
pinch event contains a scale factor, and therefore uses a division of the
current by the previous average distance to the centroid.
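The computation can be sketched as follows (with illustrative names; touch
points are represented as $(x, y)$ tuples and, for simplicity, the current
centroid is used for both the previous and the current state):

\begin{verbatim}
from math import atan2, hypot

# Sketch of the centroid-based rotation and pinch computation.
def centroid(points):
    n = float(len(points))
    return (sum(x for x, y in points) / n, sum(y for x, y in points) / n)

def rotation_and_scale(previous, current):
    # 'previous' and 'current' are equally long lists of (x, y) tuples.
    cx, cy = centroid(current)
    n = float(len(current))

    # Average difference in angle relative to the centroid (rotation).
    angle = sum(atan2(y2 - cy, x2 - cx) - atan2(y1 - cy, x1 - cx)
                for (x1, y1), (x2, y2) in zip(previous, current)) / n

    # Current average distance divided by the previous one (pinch scale).
    prev_dist = sum(hypot(x - cx, y - cy) for x, y in previous) / n
    cur_dist = sum(hypot(x - cx, y - cy) for x, y in current) / n

    return angle, cur_dist / prev_dist
\end{verbatim}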
There is a flaw in this implementation. Since the centroid is calculated using
all current touch points, there cannot be two or more rotation or pinch
gestures simultaneously. On a large multi-touch table, it is desirable to
support interaction with multiple hands, or multiple persons, at the same time.
Such application-specific requirements should be defined in the application
itself, whereas the experimental implementation defines its detection
algorithms based on its test program.

Also, the different detection algorithms are all implemented in the same file,
making the code complex to read and debug, and difficult to extend.

\chapter{Reference implementation in Python}
\label{app:implementation}
% TODO
% only window.contains on point down, not on move/up
% a few simple windows and trackers

\end{document}