\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,float,tikz,subfigure}
% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}
% Title Page
\title{A generic architecture for the detection of multi-touch gestures}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}
\begin{document}
% Title page
\maketitle
\begin{abstract}
% TODO
\end{abstract}
% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex
% Table of contents on a separate page
\tableofcontents
\chapter{Introduction}
% Rough problem statement
Multi-touch interaction is becoming increasingly common, mostly due to the wide
use of touch screens in phones and tablets. When programming applications that
use this method of interaction, the programmer needs an abstraction of the raw
data provided by the touch driver of the device. This abstraction is provided
by several multi-touch application frameworks, such as Nokia's
Qt\footnote{\url{http://qt.nokia.com/}}. However, applications that do not use
these frameworks have no access to their multi-touch events.
% Motivation
This problem was observed during an attempt to create a multi-touch
``interactor'' class for the Visualization Toolkit \cite[VTK]{VTK}. Because VTK
already provides the application framework there, it is undesirable to pull in
an entire framework like Qt only for its multi-touch support.
% Rough goal
The goal of this project is to define a generic multi-touch event triggering
architecture. To test the definition, a reference implementation is written in
Python.
\section{Definition of the problem}
% Main question
The goal of this thesis is to create a generic architecture for a
multi-touch event triggering mechanism for use in multi-touch applications.
% Sub-questions
To design such an architecture properly, the following questions are relevant:
\begin{itemize}
\item What is the input of the architecture? Different touch drivers
have different APIs. To be able to support different drivers
(which is highly desirable), there should be a translation from the
driver API to a fixed input format.
\item How can extensibility be accomplished? The set of supported
events should not be limited to a single implementation; an
application should be able to define its own custom events.
\item How can the architecture be used from different programming
languages? A generic architecture should not be restricted to use
in only one language.
\item Can events be used by multiple processes at the same time? For
example, a network implementation could run as a service instead of
within a single application, triggering events in any application
that needs them.
\end{itemize}
% Scope
The scope of this thesis includes the design of a generic multi-touch
triggering architecture, a reference implementation of this design, and its
integration into a test case application. To be successful, the design
should allow for extensions to be added to any implementation.
The reference implementation is a Proof of Concept that translates TUIO
messages to a number of simple touch gestures, which are used by several test
applications.
%Being a Proof of Concept, the reference implementation itself does not
%necessarily need to meet all the requirements of the design.
\section{Structure of this document}
% TODO: write this once the thesis is finished
\chapter{Related work}
\section{Gesture and Activity Recognition Toolkit}
The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. Its authors
state that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.
The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when using a
set of predefined rules to detect whether some sensor input can be seen as
a specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures, and flexibility in rule definitions, over-complexity can be
avoided.
% solution: trackers, e.g. separate TapTracker and TransformationTracker
\section{Gesture recognition software for Windows 7}
The online article \cite{win7touch} presents a Windows 7 application
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.
An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts.
\section{Processing implementation of simple gestures in Android}
An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment with
an export possibility for Android. See also \url{http://processing.org/}.}
can be found in a forum post on the Processing website
\cite{processingMT}. The implementation is fairly simple, but it yields
some very appealing results. The detection logic of all gestures is
combined in a single class. This does not allow for extensibility, because
the complexity of this class would increase to an undesirable level (as
predicted by the GART article \cite{GART}). However, the detection logic
itself is partially re-used in the reference implementation of the
generic gesture detection architecture.
\section{Analysis of related work}
The simple Processing implementation of multi-touch events provides most of
the functionality that can be found in existing multi-touch applications.
In fact, many applications for mobile phones and tablets only use tap and
scroll events. For this category of applications, using machine learning
seems excessive. Though the representation of a gesture as a feature
vector in a machine learning algorithm is a generic and formal way to
define a gesture, a programmer-friendly architecture should also support
simple, ``hard-coded'' detection code. A way to separate different pieces
of gesture detection code, thus keeping a code library manageable and
extensible, is to use different gesture trackers.
% FIXME: change title below
\chapter{Design - new}
% Diagrams are defined in a separate file
\input{data/diagrams}
\section{Introduction}
% TODO: rewrite intro, reference to experiment appendix
This chapter describes a design for a generic multi-touch gesture detection
architecture. The architecture consists of multiple components, each with
a specific set of tasks. Naturally, the design is based on a number of
requirements. The first three sections each describe a requirement, and a
solution that meets that requirement. The following sections show the
cohesion of the different components in the architecture.
To test multi-touch interaction properly, a multi-touch device is required.
The University of Amsterdam (UvA) has provided access to a multi-touch
table from PQlabs. The table uses the TUIO protocol \cite{TUIO} to
communicate touch events. See appendix \ref{app:tuio} for details regarding
the TUIO protocol.
\subsection*{Position of architecture in software}
The input of the architecture comes from some multi-touch device
driver. For example, the table used in the experiments uses the TUIO
protocol. The task of the architecture is to translate this input to
multi-touch gestures that are used by an application, as illustrated in
figure \ref{fig:basicdiagram}. At the end of this chapter, the diagram
is extended with the different components of the architecture.
\basicdiagram{A diagram showing the position of the architecture
relative to a multi-touch application.}
\section{Supporting multiple drivers}
The TUIO protocol is an example of a touch driver that can be used by
multi-touch devices. Other drivers do exist, and these should also be
supported by the architecture. Therefore, there must be some translation of
driver-specific messages to a common format in the architecture. Messages in
this common format will be called \emph{events}. Events can be translated
to multi-touch \emph{gestures}. The most basic set of events is
$\{point\_down, point\_move, point\_up\}$.
A more extended set could also contain more complex events. For example, an
object can also have a rotational property, like the ``fiducials'' type in
the TUIO protocol. This results in $\{point\_down, point\_move, point\_up,
object\_down, object\_move, object\_up,\\object\_rotate\}$.
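
To make this common format concrete, the following sketch shows one possible
representation of the basic event set in Python. The \texttt{Event} class and
its fields are hypothetical and serve only as an illustration; the reference
implementation may represent events differently.
\begin{verbatim}
# A minimal sketch of the common event format (hypothetical names).
POINT_DOWN, POINT_MOVE, POINT_UP = 'point_down', 'point_move', 'point_up'

class Event(object):
    """A driver-independent touch event in screen pixel coordinates."""
    def __init__(self, event_type, point_id, x, y):
        self.type = event_type    # one of the event type constants above
        self.point_id = point_id  # identifies the touch object
                                  # (e.g. a TUIO session id)
        self.x = x                # horizontal position in pixels
        self.y = y                # vertical position in pixels
\end{verbatim}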
The component that translates driver-specific messages to events is called
the \emph{event driver}. The event driver runs in a loop, receiving and
analyzing driver messages. Which event driver is used in an application
depends on the driver supported by the multi-touch device.
When a sequence of messages is analyzed as an event, the event driver
delegates the event to other components in the architecture for translation
to gestures.
\driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
showing the position of the event driver in the architecture.}
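
As an illustration of this loop, an event driver implementation could roughly
look as follows. The class and method names are hypothetical; only the
produced events correspond to the common format defined above.
\begin{verbatim}
# Hypothetical sketch of an abstract event driver.
class EventDriver(object):
    def __init__(self, delegate):
        # 'delegate' receives the translated events (e.g. the root widget).
        self.delegate = delegate
        self.running = False

    def receive_message(self):
        # Driver-specific: block until a driver message arrives.
        raise NotImplementedError

    def translate(self, message):
        # Driver-specific: translate a message to zero or more events.
        raise NotImplementedError

    def run(self):
        # Receive and translate driver messages until stopped.
        self.running = True
        while self.running:
            message = self.receive_message()
            for event in self.translate(message):
                self.delegate.handle_event(event)
\end{verbatim}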
\section{Restricting gestures to a screen area}
An application programmer should be able to bind a gesture handler to some
element on the screen. For example, a button tap\footnote{A ``tap'' gesture
is triggered when a touch object releases the screen within a certain time
and distance from the point where it initially touched the screen.} should
only occur on the button itself, and not in any other area of the screen. A
solution to this problem is the use of \emph{widgets}. The button from the
example can be represented as a rectangular widget with a position and
size. The position and size are compared with event coordinates to
determine whether an event occurs within the button.
\subsection*{Widget tree}
A problem occurs when widgets overlap. If a button is placed over a
container and an event occurs inside the button, should the
button handle the event first? And should the container receive the
event at all, or should it be reserved for the button?
The solution to this problem is to save widgets in a tree structure.
There is one root widget, whose size is limited by the size of the
touch screen. Being the leaf widget, and thus the widget that is
actually touched when an object touches the device, the button widget
should receive an event before its container does. However, events
occur on a screen-wide level and thus at the root level of the widget
tree. Therefore, an event is delegated down the tree before any analysis
is performed. Delegation stops at the ``lowest'' widget in the tree
containing the event coordinates. That widget then performs some
analysis of the event, after which the event is released back to the
parent widget for analysis. This release of an event to a parent widget
is called \emph{propagation}. To be able to reserve an event for some
widget or analysis, the propagation of an event can be stopped during
analysis.
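
The following sketch illustrates this delegation and propagation mechanism.
The class and attribute names are hypothetical; in particular, the flag that
stops propagation is only one possible way to implement the described
behaviour.
\begin{verbatim}
# Hypothetical sketch of delegation and propagation in a widget tree.
class Widget(object):
    def __init__(self):
        self.parent = None
        self.children = []

    def contains(self, event):
        # To be overridden, e.g. with a position/size comparison.
        raise NotImplementedError

    def handle_event(self, event):
        # Delegate the event down to the lowest widget containing it.
        for child in self.children:
            if child.contains(event):
                return child.handle_event(event)
        self.analyze(event)

    def analyze(self, event):
        # Perform event analysis on this widget (filled in later by
        # gesture trackers), then propagate the event to the parent
        # widget unless propagation has been stopped.
        if self.parent is not None and not event.propagation_stopped:
            self.parent.analyze(event)
\end{verbatim}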
% TODO: inspired by the JavaScript DOM
% TODO: add GTK to bibliography
Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
manage their widgets. This makes it easy to connect the architecture to
such a framework. For example, the programmer can define a
\texttt{GtkTouchWidget} that synchronizes the position of a touch
widget with that of a GTK widget, using GTK signals.
\subsection*{Callbacks}
\label{sec:callbacks}
When an event is propagated by a widget, it is first used for event
analysis on that widget. The event analysis can then trigger a gesture
in the widget, which has to be handled by the application. To handle a
gesture, the widget should provide a callback mechanism: the
application binds a handler for a specific type of gesture to a widget.
When event analysis triggers a gesture of that type, the widget
executes the callback.
\subsection*{Position of widget tree in architecture}
\widgetdiagram{Extension of the diagram from figure
\ref{fig:driverdiagram}, showing the position of widgets in the
architecture.}
\section{Event analysis}
The events that are delegated to widgets must be analyzed in some way to
form gestures. This analysis is specific to the type of gesture being
detected. For example, the detection of a ``tap'' gesture is very different
from the detection of a ``rotate'' gesture. The .NET implementation
\cite{win7touch} separates the detection of different gestures
into different \emph{gesture trackers}. This keeps the different pieces of
detection code manageable and extensible. Therefore, the architecture also
uses gesture trackers to separate the analysis of events. A single gesture
tracker detects a specific set of gesture types, given a sequence of
events. An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
\subsection*{Assignment of a gesture tracker to a widget}
As explained in section \ref{sec:callbacks}, events are delegated from
a widget to some event analysis. The analysis component of a widget
consists of a list of gesture trackers, each tracking a specific set of
gestures. No two trackers in the list should be tracking the same
gesture type.
When a handler for a gesture is ``bound'' to a widget, the widget
asserts that it has a tracker that is tracking this gesture. Thus, the
programmer does not create gesture trackers manually. Figure
\ref{fig:trackerdiagram} shows the position of gesture trackers in the
architecture.
\trackerdiagram{Extension of the diagram from figure
\ref{fig:widgetdiagram}, showing the position of gesture trackers in
the architecture.}
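
A possible sketch of this binding step is shown below. The registry that maps
gesture types to tracker classes is hypothetical; it merely illustrates how a
widget could create the required tracker on demand, assuming the widget keeps
its trackers in a list as described above.
\begin{verbatim}
# Hypothetical registry mapping a gesture type to the tracker class that
# detects it (filled in by tracker implementations).
TRACKER_CLASSES = {}  # e.g. {'tap': TapTracker}

def bind(widget, gesture_type, handler):
    tracker_class = TRACKER_CLASSES[gesture_type]

    # Reuse an existing tracker of this class, or create one on demand,
    # so the programmer never creates gesture trackers manually.
    for tracker in widget.trackers:
        if isinstance(tracker, tracker_class):
            break
    else:
        tracker = tracker_class()
        widget.trackers.append(tracker)

    # Register the application's handler with the tracker.
    tracker.bind(gesture_type, handler)
\end{verbatim}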
\section{Example usage}
% FIXME: Delete the 2 following chapters
\chapter{Experiments}
\label{chapter:requirements}
% test implementation with taps, rotation and pinch. This showed:
% - that there are several ways to detect e.g. "rotation" (and that it
%   should be possible to distinguish between them)
% - that detection of different kinds of gestures must be separable,
%   otherwise the code becomes chaotic.
% - a number of choices were made when designing the gestures, e.g. that
%   rotation uses ALL fingers for the centroid. Another program might need
%   to use only one hand, and thus pick points that are close together
%   (solution: windows).
\section{Introduction}
To test multi-touch interaction properly, a multi-touch device is required.
The University of Amsterdam (UvA) has provided access to a multi-touch
table from PQlabs. The table uses the TUIO protocol \cite{TUIO} to
communicate touch events. See appendix \ref{app:tuio} for details regarding
the TUIO protocol.
\section{Summary of observations}
\label{sec:observations}
\begin{itemize}
\item The TUIO protocol uses a distinctive coordinate system and set of
messages.
\item Touch events occur outside of the application window.
\item Gestures that use multiple touch points use all touch
points (not a subset of them).
\item Code complexity increases when detection algorithms are added.
\item A multi-touch application can have very specific requirements for
gestures.
\end{itemize}
\section{Requirements}
From the observations in section \ref{sec:observations}, a number of
requirements can be specified for the design of the event mechanism:
\begin{itemize}
% translating driver-specific events to a common format
\item To be able to support multiple input drivers, there must be a
translation from driver-specific messages to some common format
that can be used in gesture detection algorithms.
% assigning events to a GUI window (windows)
\item An application GUI window should be able to receive only events
occurring within that window, and not outside of it.
% separating groups of touch points for different gestures (windows)
\item To support multiple objects that are performing different
gestures at the same time, the architecture must be able to perform
gesture detection on a subset of the active touch points.
% separating detection code for different gesture types
\item To avoid an increase in code complexity when adding new detection
algorithms, detection code of different gesture types must be
separated.
% extensibility
\item The architecture should allow for new detection algorithms
to be added to an implementation. This enables a
programmer to define custom gestures for an application.
\end{itemize}
\chapter{Design}
\section{Components}
Based on the requirements from chapter \ref{chapter:requirements}, a design
for the architecture has been created. The design consists of a number
of components, each having a specific set of tasks.
% TODO: Rewrite components, use more diagrams
\subsection{Event server}
% translation from driver messages to point down, move, up
% translation to screen pixel coordinates
% TUIO in reference implementation
The \emph{event server} is an abstraction for driver-specific server
implementations, such as a TUIO server. It receives driver-specific
messages and translates these to a common set of events and a common
coordinate system.
A minimal example of a common set of events is $\{point\_down,
point\_move, point\_up\}$. This is the set used by the reference
implementation. Respectively, these events represent an object being
placed on the screen, moving along the surface of the screen, and being
released from the screen.
A more extended set could also contain the same three events for an
object touching the screen. However, an object can also have a
rotational property, like the ``fiducials'' type in the TUIO protocol.
This results in $\{point\_down, point\_move, point\_up, object\_down,
object\_move, object\_up,\\object\_rotate\}$.
% TODO: is this useful? merge point_down/object_down in some way?
An important note here is that similar events triggered by different
event servers must have the same event type and parameters. In other
words, the output of the event servers should be determined by the
gesture server (not the other way around).
The output of an event server implementation should also use a common
coordinate system, namely the coordinate system used by the gesture
server. For example, the reference implementation uses screen
coordinates in pixels, where (0, 0) is the upper left corner and
(\emph{screen width}, \emph{screen height}) the lower right corner of
the screen.
The abstract class definition of the event server should provide some
functionality to detect which driver-specific event server
implementation should be used.
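
To make the coordinate translation concrete, the sketch below converts
normalized TUIO coordinates to the screen pixel coordinates used by the
gesture server. The function name and the hard-coded screen resolution are
assumptions for the sake of illustration.
\begin{verbatim}
# Hypothetical sketch: translating normalized TUIO coordinates (0.0-1.0)
# to screen pixel coordinates, as used by the reference implementation.
SCREEN_WIDTH, SCREEN_HEIGHT = 1920, 1080  # assumed screen resolution

def tuio_to_pixels(x, y):
    # (0.0, 0.0) is the top left corner, (1.0, 1.0) the bottom right one.
    return int(x * SCREEN_WIDTH), int(y * SCREEN_HEIGHT)
\end{verbatim}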
\subsection{Gesture trackers}
Like the .NET implementation \cite{win7touch}, the architecture uses a
\emph{gesture tracker} to detect whether a sequence of events forms a
particular gesture. A gesture tracker detects and triggers events for a
limited set of gesture types, given a set of touch points. If one group
of touch points is assigned to one tracker and another group to another
tracker, multiple gestures can be detected at the same time. For the
assignment of different groups of touch points to different gesture
trackers, the architecture uses so-called \emph{windows}. These are
described in the next section.
% event binding/triggering
A gesture tracker triggers a gesture event by executing a callback.
Callbacks are ``bound'' to a tracker by the application. Because
multiple gesture types can have very similar detection algorithms, a
tracker can detect multiple different types of gestures. For instance,
the rotation and pinch gestures from the experimental program in
section \ref{sec:experimental-draw} both use the centroid of all touch
points.
If no callback is bound for a particular gesture type, no detection of
that type is needed. A tracker implementation can use this knowledge
for code optimization.
% separation of detection algorithms
A tracker implementation defines the gesture types it can trigger, and
the detection algorithms to trigger them. Consequently, detection
algorithms can be separated into different trackers. Different
trackers can be saved in different files, reducing the complexity of
the code in a single file.
% extensibility
Because a tracker defines its own set of gesture types, the application
developer can define application-specific trackers (by extending a base
\texttt{GestureTracker} class, for example). In fact, any built-in
gesture trackers of an implementation are also created this way. This
allows for a plugin-like way of programming, which is very desirable if
someone wants to build a library of gesture trackers. Such a
library can easily be extended by others.
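
As an illustration of such a tracker, the sketch below outlines a
hypothetical, self-contained tap tracker built on the basic point events. In
an actual implementation, the callback bookkeeping shown here would live in
the base \texttt{GestureTracker} class; the thresholds and names are
assumptions.
\begin{verbatim}
# Hypothetical sketch of a tracker that detects "tap" gestures: a point
# is released within a certain time and distance from where it went down.
import time

class TapTracker(object):
    gesture_types = ['tap']
    MAX_TIME = 0.3       # seconds
    MAX_DISTANCE = 10    # pixels

    def __init__(self):
        self.handlers = {}  # gesture type -> list of bound callbacks
        self.down = {}      # point id -> (timestamp, x, y)

    def bind(self, gesture_type, handler):
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger(self, gesture_type, **gesture):
        for handler in self.handlers.get(gesture_type, []):
            handler(gesture)

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down[event.point_id] = (time.time(), event.x, event.y)
        elif event.type == 'point_up' and event.point_id in self.down:
            t, x, y = self.down.pop(event.point_id)
            moved = max(abs(event.x - x), abs(event.y - y))
            if time.time() - t <= self.MAX_TIME and moved <= self.MAX_DISTANCE:
                self.trigger('tap', x=event.x, y=event.y)
\end{verbatim}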
\subsection{Windows}
A \emph{window} represents a subset of the entire screen surface. The
goal of a window is to restrict the detection of certain gestures to
certain areas. A window contains a list of touch points, and a list of
trackers. A gesture server (defined in the next section) assigns touch
points to a window, but the window itself defines functionality to
check whether a touch point is inside the window. This way, new windows
can be defined to fit over any 2D object used by the application.
The first and most obvious use of a window is to restrict touch events
to a single application window. However, windows can be used
in far more powerful ways.
For example, consider an application containing an image with a transparent
background that can be dragged around. The user can only drag the image
by touching its foreground. To accomplish this, the application
programmer can define a window type that uses a bitmap to determine
whether a touch point is on the visible image surface. The tracker
which detects drag gestures is then bound to this window, limiting the
occurrence of drag events to the image surface.
% assigning events to a part of the screen:
% TUIO coordinates span the whole screen and range from 0.0 to 1.0, so
% they must be translated to pixel coordinates within a ``window''
% TODO
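
The sketch below illustrates this idea with two hypothetical window types: a
rectangular window, and a bitmap-based window that only considers visible
pixels to be inside the window. Only the containment test is shown; touch
point and tracker management are omitted.
\begin{verbatim}
# Hypothetical sketch of two window types.
class RectangularWindow(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def contains(self, x, y):
        # True if the point lies within the rectangle.
        return (self.x <= x < self.x + self.width
                and self.y <= y < self.y + self.height)

class BitmapWindow(RectangularWindow):
    def __init__(self, x, y, mask):
        # 'mask' is a 2D list of booleans marking the visible pixels.
        height, width = len(mask), len(mask[0])
        super(BitmapWindow, self).__init__(x, y, width, height)
        self.mask = mask

    def contains(self, x, y):
        # Only visible (foreground) pixels count as "inside" the window.
        return (RectangularWindow.contains(self, x, y)
                and self.mask[int(y - self.y)][int(x - self.x)])
\end{verbatim}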
\subsection{Gesture server}
% listens for point down, move, up
The \emph{gesture server} delegates events from the event server to the
set of windows that contain the touch points related to the events.
% assignment of a point (down) to window(s)
The gesture server contains a list of windows. When the event server
triggers an event, the gesture server ``asks'' each window whether it
contains the related touch point. If so, the window updates its gesture
trackers, which can then trigger gestures.
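
A minimal sketch of this delegation step could look as follows; the class and
method names are hypothetical, and the windows are assumed to keep their
trackers in a list as described above.
\begin{verbatim}
# Hypothetical sketch of the delegation performed by the gesture server.
class GestureServer(object):
    def __init__(self):
        self.windows = []

    def add_window(self, window):
        self.windows.append(window)

    def handle_event(self, event):
        # Ask each window whether it contains the touch point; if so, let
        # the window update its gesture trackers.
        for window in self.windows:
            if window.contains(event.x, event.y):
                for tracker in window.trackers:
                    tracker.handle_event(event)
\end{verbatim}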
\section{Diagrams}
\section{Example usage}
This section describes an example that illustrates the communication
between different components. The example application listens to tap events
in a GUI window.
\begin{verbatim}
# Create a gesture server that will be started later
server = new GestureServer object

# Add a new window to the server, representing the GUI
window = new Window object
set window position and size to that of GUI window
add window to server

# Define a handler that must be triggered when a tap gesture is detected
begin function handler(gesture)
    # Do something
end function

# Create a tracker that detects tap gestures
tracker = new TapTracker object  # Where TapTracker is an implementation of
                                 # the abstract Tracker
add tracker to window
bind handler to tracker.tap

# If the GUI toolkit allows it, bind window movement and resize handlers
# that alter the position and size of the window object

# Start the gesture server (which in turn starts a driver-specific event
# server)
start server
\end{verbatim}
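
Translated to Python, the same example could look roughly like the sketch
below. The names mirror the pseudocode above, but the exact API of the
reference implementation may differ.
\begin{verbatim}
# Rough Python translation of the pseudocode above (hypothetical API).
server = GestureServer()

# A window representing the GUI window of the application.
window = Window(x=0, y=0, width=640, height=480)
server.add_window(window)

# Handler that is executed when a tap gesture is detected.
def handler(gesture):
    print('tap gesture:', gesture)

# TapTracker is an implementation of the abstract tracker class.
tracker = TapTracker()
window.add_tracker(tracker)
tracker.bind('tap', handler)

# If the GUI toolkit allows it, window movement/resize handlers should
# update the position and size of the window object here.

# Start the gesture server (which starts a driver-specific event server).
server.start()
\end{verbatim}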
\chapter{Test applications}
% TODO
% test programs with PyGame
%\chapter{Conclusions}
% TODO
% Windows are a way to assign global events to application windows
% Trackers are an effective way to detect gestures
% Trackers are extensible through object orientation
\chapter{Suggestions for future work}
% TODO
% use a formal definition of gestures in gesture trackers, e.g. a state machine
% network protocol (ZeroMQ) for multiple languages and simultaneous processes
% also: an extra layer that creates gesture windows corresponding to window manager windows
% windows in a tree structure for efficiency
\bibliographystyle{plain}
\bibliography{report}
\appendix
\chapter{The TUIO protocol}
\label{app:tuio}
The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).
For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.
A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.
The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session id's that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information about a
session id, such as position, velocity and acceleration.
Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'', which is an (x, y)
position on the screen.
ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was in the previous message but not in the
current one, the object it represents has been lifted from the screen.
SET messages provide information about movement. In the case of simple (x, y)
positions, only the movement vector of the position itself can be calculated.
For more complex objects such as fiducials, arguments like the rotational
position are also included.
ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the .NET application \cite{win7touch}).
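
The sketch below illustrates how such events could be derived: the first SET
message for a session id becomes a ``point down'' event, subsequent SET
messages become ``point move'' events, and session ids that disappear from the
ALIVE list produce ``point up'' events. The class and callback names are
hypothetical.
\begin{verbatim}
# Hypothetical sketch: deriving point down/move/up events from TUIO
# ALIVE and SET messages.
class TuioState(object):
    def __init__(self, emit):
        self.points = {}   # session id -> last known (x, y)
        self.emit = emit   # callback: emit(event_type, session_id, x, y)

    def on_alive(self, session_ids):
        # Session ids that disappeared from the ALIVE list were lifted
        # from the screen.
        for sid in set(self.points) - set(session_ids):
            x, y = self.points.pop(sid)
            self.emit('point_up', sid, x, y)

    def on_set(self, session_id, x, y):
        # The first SET message for a session id is a "point down",
        # subsequent ones are "point move" events.
        if session_id not in self.points:
            self.emit('point_down', session_id, x, y)
        else:
            self.emit('point_move', session_id, x, y)
        self.points[session_id] = (x, y)
\end{verbatim}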
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
\chapter{Experimental program}
\label{app:experiment}
% TODO: rewrite intro
When designing a software library, its API should be understandable and easy to
use for programmers. To find out the basic requirements for a usable API,
an experimental program has been written based on the Processing code
from \cite{processingMT}. The program receives TUIO events and translates them
to point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted as (double or single) \emph{tap}, \emph{rotation} or
\emph{pinch} gestures. A simple drawing program then draws the current state to
the screen using the PyGame library. The output of the program can be seen in
figure \ref{fig:draw}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{data/experimental_draw.png}
\caption{Output of the experimental drawing program. It draws the touch
points and their centroid on the screen (the centroid is used as center
point for rotation and pinch detection). It also draws a green
rectangle which responds to rotation and pinch events.}
\label{fig:draw}
\end{figure}
One of the first observations is the fact that TUIO's \texttt{SET} messages use
the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these coordinates by its own dimensions, thus mapping the
entire screen to its window. Also, the implementation only works with the TUIO
protocol. Other drivers are not supported.
Though using relatively simple math, the rotation and pinch events work
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle relative to the
centroid of all touch points, and \emph{pinch} uses the difference in distance.
Both values are normalized using division by the number of touch points. A
pinch event contains a scale factor, and therefore uses a division of the
current by the previous average distance to the centroid.
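
The following sketch is a simplified reconstruction of that computation (not
the actual code of the experimental program): rotation is the average
per-point difference in angle relative to the centroid, and pinch is the ratio
of the current to the previous average distance to the centroid.
\begin{verbatim}
# Simplified reconstruction of the rotation/pinch computation: both use
# the centroid of all touch points.
from math import atan2, hypot

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)

def rotation_and_pinch(previous, current):
    """Return (angle difference, scale factor) between two equally long
    lists of touch point positions (matched by index)."""
    px, py = centroid(previous)
    cx, cy = centroid(current)

    # Per-point differences in angle relative to the centroid, normalized
    # by dividing by the number of touch points.
    angle = sum(atan2(y1 - cy, x1 - cx) - atan2(y0 - py, x0 - px)
                for (x0, y0), (x1, y1) in zip(previous, current))
    angle /= len(current)

    # Pinch: ratio of the current to the previous average distance to
    # the centroid (a scale factor).
    prev_dist = sum(hypot(x - px, y - py) for x, y in previous) / len(previous)
    cur_dist = sum(hypot(x - cx, y - cy) for x, y in current) / len(current)

    return angle, cur_dist / prev_dist
\end{verbatim}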
There is a flaw in this implementation. Since the centroid is calculated using
all current touch points, there cannot be two or more rotation or pinch
gestures simultaneously. On a large multi-touch table, it is desirable to
support interaction with multiple hands, or multiple persons, at the same time.
This kind of application-specific requirement should be defined in the
application itself, whereas the experimental implementation bases its detection
algorithms on its own test program.
Also, the different detection algorithms are all implemented in the same file,
making it complex to read or debug, and difficult to extend.
\chapter{Reference implementation in Python}
\label{app:implementation}
% TODO
% only window.contains on point down, not on move/up
% a few simple windows and trackers
\end{document}