
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,float,tikz,subfigure}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for the detection of multi-touch gestures}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
    % TODO
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents

\chapter{Introduction}

% TODO: put Qt link in bibtex
Multi-touch devices enable a user to interact with software using intuitive
body gestures, rather than with interaction tools such as a mouse and keyboard.
With the growing use of touch screens in phones and tablets, multi-touch
interaction is becoming increasingly common. The driver of a touch device
provides low-level events. The most basic representation of these low-level
events consists of \emph{down}, \emph{move} and \emph{up} events.

Multi-touch gestures must be designed in such a way that they can be
represented by a sequence of basic events. For example, a ``tap'' gesture can
be represented as a \emph{down} event that is followed by an \emph{up} event
within a certain time.

The translation of driver-specific messages to basic events, and of events to
multi-touch gestures, is often embedded in multi-touch application frameworks,
such as Nokia's Qt \cite{qt}. However, there is no separate implementation of
this translation process itself. Consequently, an application developer who
wants to use multi-touch interaction in an application is forced to choose an
application framework that includes support for multi-touch gestures.
Moreover, the set of supported gestures is limited by the chosen framework. To
incorporate a custom gesture in an application, the framework needs to provide
a way to extend its existing multi-touch gestures.
% Main question
The goal of this thesis is to create a generic architecture for the support of
multi-touch gestures in applications. To test the design of the architecture, a
reference implementation is written in Python. The architecture should
incorporate the translation process from low-level driver messages to
multi-touch gestures. It should be able to run alongside an application
framework. The definition of multi-touch gestures should allow extensions, so
that custom gestures can be defined.

% Sub-questions
To design such an architecture properly, the following questions are relevant:
\begin{itemize}
    \item What is the input of the architecture? This is determined by the
    output of multi-touch drivers.
    \item How can extensibility of the supported gestures be accomplished?
    % TODO: are the questions below still relevant? Better to rephrase them as
    % ``design''-related questions?
    \item How can the architecture be used from different programming
    languages? A generic architecture should not be limited to use in only one
    language.
    \item Can events be used by multiple processes at the same time? For
    example, a network implementation could run as a service instead of
    within a single application, triggering events in any application that
    needs them.
\end{itemize}
% Scope
The scope of this thesis includes the design of a generic multi-touch detection
architecture, a reference implementation of this design written in Python, and
the integration of the reference implementation in a test case application. To
test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.
The reference implementation is a proof of concept that translates TUIO
messages to some simple touch gestures (see appendix \ref{app:implementation}
for details).

\section{Structure of this document}

% TODO: write this once the thesis is finished
\chapter{Related work}

\section{Gesture and Activity Recognition Toolkit}

The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. Its authors
state that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.

The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when using a
set of predefined rules to detect whether some sensor input can be seen as
a specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures, and flexibility in rule definitions, over-complexity can be
avoided.
% solution: trackers, e.g. a separate TapTracker and TransformationTracker
\section{Gesture recognition software for Windows 7}

The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation about this application is that different gestures
are detected by different gesture trackers, thus separating gesture detection
code into maintainable parts.
\section{Processing implementation of simple gestures in Android}

An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment with
an export possibility for Android. See also \url{http://processing.org/}.}
can be found in a forum on the Processing website
\cite{processingMT}. The implementation is fairly simple, but it yields
some very appealing results. The detection logic of all gestures is
combined in a single class. This does not allow for extensibility, because
the complexity of this class would increase to an undesirable level (as
predicted by the GART article \cite{GART}). However, the detection logic
itself is partially re-used in the reference implementation of the
generic gesture detection architecture.
\section{Analysis of related work}

The simple Processing implementation of multi-touch events provides most of
the functionality that can be found in existing multi-touch applications.
In fact, many applications for mobile phones and tablets only use tap and
scroll events. For this category of applications, using machine learning
seems excessive. Though the representation of a gesture as a feature
vector in a machine learning algorithm is a generic and formal way to
define a gesture, a programmer-friendly architecture should also support
simple, ``hard-coded'' detection code. A way to separate different pieces
of gesture detection code, thus keeping a code library manageable and
extensible, is to use different gesture trackers.

% FIXME: change title below
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The chapter represents the
architecture as a diagram of relations between different components.
Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
requirements for the architecture, and extend the diagram with components
that meet these requirements. Section \ref{sec:example} describes an
example usage of the architecture in an application.

\subsection*{Position of architecture in software}

The input of the architecture comes from some multi-touch device
driver. For example, the table used in the experiments uses the TUIO
protocol. The task of the architecture is to translate this input to
multi-touch gestures that are used by an application, as illustrated in
figure \ref{fig:basicdiagram}. In the course of this chapter, the
diagram is extended with the different components of the architecture.

\basicdiagram{A diagram showing the position of the architecture
relative to the device driver and a multi-touch application.}
\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol is an example of a touch driver that can be used by
multi-touch devices. Other drivers exist as well, and these should also be
supported by the architecture. Therefore, there must be some translation of
driver-specific messages to a common format in the architecture. Messages in
this common format will be called \emph{events}. Events can be translated
to multi-touch \emph{gestures}. The most basic set of events is
$\{point\_down, point\_move, point\_up\}$. Here, a ``point'' is a touch
object with only an $(x, y)$ position on the screen.

A more extended set could also contain more complex events. An object can
also have a rotational property, like the ``fiducials'' type in the TUIO
protocol. This results in $\{point\_down, point\_move, point\_up,
object\_down, object\_move, object\_up, object\_rotate\}$.

The component that translates driver-specific messages to events is called
the \emph{event driver}. The event driver runs in a loop, receiving and
analyzing driver messages. The event driver that is used in an application
depends on the multi-touch device that is being used.
When a sequence of messages is analyzed as an event, the event driver
delegates the event to other components in the architecture for translation
to gestures.

\driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
showing the position of the event driver in the architecture.}
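As an illustration, an event driver could be sketched in Python as follows.
This is a minimal sketch only: the class and method names
(\texttt{EventDriver}, \texttt{TuioEventDriver}, \texttt{delegate}) and the
helpers marked as hypothetical are assumptions for this example, not a fixed
API of the reference implementation.
\begin{verbatim}
class EventDriver(object):
    """Base class: translates driver-specific messages to common events."""

    def __init__(self):
        self.listeners = []

    def delegate(self, event):
        # Hand a translated event to the other components of the
        # architecture (listeners register themselves with the driver).
        for listener in self.listeners:
            listener.handle_event(event)

    def start(self):
        # The receive loop of a specific driver implementation.
        raise NotImplementedError


class TuioEventDriver(EventDriver):
    """Sketch of a driver that listens for TUIO messages."""

    def start(self):
        while True:
            message = self.receive_tuio_message()  # hypothetical helper
            event = self.translate(message)        # hypothetical helper,
            if event is not None:                  # e.g. yields 'point_down'
                self.delegate(event)
\end{verbatim}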
\section{Restricting gestures to a screen area}

An application programmer should be able to bind a gesture handler to some
element on the screen. For example, a button tap\footnote{A ``tap'' gesture
is triggered when a touch object releases the screen within a certain time
and distance from the point where it initially touched the screen.} should
only occur on the button itself, and not in any other area of the screen. A
solution to this problem is the use of \emph{widgets}. The button from the
example can be represented as a rectangular widget with a position and
size. The position and size are compared with event coordinates to
determine whether an event occurs within the button.

\subsection*{Widget tree}

A problem occurs when widgets overlap. If a button is placed over a
container and an event occurs inside the button, should the
button handle the event first? And should the container receive the
event at all, or should it be reserved for the button?

The solution to this problem is to save widgets in a tree structure.
There is one root widget, whose size is limited by the size of the
touch screen. Being the leaf widget, and thus the widget that is
actually touched when an object touches the device, the button widget
should receive an event before its container does. However, events
occur on a screen-wide level and thus at the root level of the widget
tree. Therefore, an event is delegated down the tree before any analysis
is performed. Delegation stops at the ``lowest'' widget in the tree
containing the event coordinates. That widget then performs some
analysis of the event, after which the event is released back to the
parent widget for analysis. This release of an event to a parent widget
is called \emph{propagation}. To be able to reserve an event for some
widget or analysis, the propagation of an event can be stopped during
analysis. A sketch of this delegation and propagation scheme is given
below.
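The following minimal Python sketch illustrates delegation and propagation in
a widget tree. The names (\texttt{Widget}, \texttt{contains},
\texttt{delegate\_event}) and the \texttt{propagation\_stopped} flag on the
event object are illustrative assumptions, not the definitive API of the
reference implementation.
\begin{verbatim}
class Widget(object):
    """Rectangular widget in a tree; sketch of delegation/propagation."""

    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height
        self.parent = None
        self.children = []

    def add_child(self, widget):
        widget.parent = self
        self.children.append(widget)

    def contains(self, event):
        return (self.x <= event.x <= self.x + self.width
                and self.y <= event.y <= self.y + self.height)

    def delegate_event(self, event):
        # Delegate the event down to the lowest widget that contains it.
        for child in self.children:
            if child.contains(event):
                return child.delegate_event(event)
        self.analyze_and_propagate(event)

    def analyze_and_propagate(self, event):
        self.analyze(event)
        # Release the event to the parent widget, unless the analysis
        # stopped propagation (assumed flag on the event object).
        if self.parent is not None and not event.propagation_stopped:
            self.parent.analyze_and_propagate(event)

    def analyze(self, event):
        # Gesture trackers analyze the event here (see the next sections).
        pass
\end{verbatim}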
% TODO: inspired by JavaScript DOM
% TODO: add GTK to bibliography
Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
manage their widgets. This makes it easy to connect the architecture to
such a framework. For example, the programmer can define a
\texttt{GtkTouchWidget} that synchronises the position of a touch
widget with that of a GTK widget, using GTK signals.
\subsection*{Callbacks}
\label{sec:callbacks}

When an event is propagated by a widget, it is first used for event
analysis on that widget. The event analysis can then trigger a gesture
in the widget, which has to be handled by the application. To handle a
gesture, the widget should provide a callback mechanism: the
application binds a handler for a specific type of gesture to a widget.
When a gesture of that type is triggered after event analysis, the
widget triggers the callback.

\subsection*{Position of widget tree in architecture}

\widgetdiagram{Extension of the diagram from figure
\ref{fig:driverdiagram}, showing the position of widgets in the
architecture.}
\section{Event analysis}
\label{sec:event-analysis}

The events that are delegated to widgets must be analyzed in some way to
form gestures. This analysis is specific to the type of gesture being
detected. For example, the detection of a ``tap'' gesture is very different
from the detection of a ``rotate'' gesture. The .NET implementation
\cite{win7touch} separates the detection of different gestures
into different \emph{gesture trackers}. This keeps the different pieces of
detection code manageable and extensible. Therefore, the architecture also
uses gesture trackers to separate the analysis of events. A single gesture
tracker detects a specific set of gesture types, given a sequence of
events. An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
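A gesture tracker could, for instance, be sketched in Python as follows. The
\texttt{GestureTracker} base class, the \texttt{TapTracker} example, the
\texttt{event.type} attribute and the \texttt{widget.trigger} call (the
callback mechanism from the previous section) are assumptions used for
illustration, not the definitive classes of the reference implementation.
\begin{verbatim}
import time

class GestureTracker(object):
    """Base class: analyzes a sequence of events for a set of gestures."""
    gesture_types = []

    def __init__(self, widget):
        self.widget = widget

    def handle_event(self, event):
        raise NotImplementedError


class TapTracker(GestureTracker):
    """Detects a 'tap': an up event shortly after a down event."""
    gesture_types = ['tap']
    max_duration = 0.3  # seconds, illustrative threshold

    def __init__(self, widget):
        super(TapTracker, self).__init__(widget)
        self.down_time = None

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down_time = time.time()
        elif event.type == 'point_up' and self.down_time is not None:
            if time.time() - self.down_time < self.max_duration:
                # Trigger the widget's 'tap' callback (see Callbacks).
                self.widget.trigger('tap', event)
            self.down_time = None
\end{verbatim}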
\subsection*{Assignment of a gesture tracker to a widget}

As explained in section \ref{sec:callbacks}, events are delegated from
a widget to some event analysis. The analysis component of a widget
consists of a list of gesture trackers, each tracking a specific set of
gestures. No two trackers in the list should be tracking the same
gesture type.

When a handler for a gesture is ``bound'' to a widget, the widget
asserts that it has a tracker that is tracking this gesture. Thus, the
programmer does not create gesture trackers manually. Figure
\ref{fig:trackerdiagram} shows the position of gesture trackers in the
architecture.

\trackerdiagram{Extension of the diagram from figure
\ref{fig:widgetdiagram}, showing the position of gesture trackers in
the architecture.}
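A possible way to implement this binding step is sketched below, building on
the earlier \texttt{Widget} and \texttt{TapTracker} sketches. The registry,
the \texttt{bind} and \texttt{trigger} methods, and the assumed
\texttt{TransformationTracker} class are illustrative only.
\begin{verbatim}
# Hypothetical registry mapping a gesture type to the tracker class
# that detects it (TransformationTracker is assumed to exist).
TRACKER_CLASSES = {'tap': TapTracker, 'pinch': TransformationTracker}

class TrackingWidget(Widget):
    """Widget extended with handler binding and gesture callbacks."""

    def __init__(self, *args, **kwargs):
        super(TrackingWidget, self).__init__(*args, **kwargs)
        self.trackers = {}   # gesture type -> tracker instance
        self.handlers = {}   # gesture type -> list of handler functions

    def bind(self, gesture_type, handler):
        # Binding a handler implicitly creates the required tracker,
        # so the programmer never instantiates trackers manually.
        if gesture_type not in self.trackers:
            tracker_class = TRACKER_CLASSES[gesture_type]
            self.trackers[gesture_type] = tracker_class(self)
        self.handlers.setdefault(gesture_type, []).append(handler)

    def analyze(self, event):
        # Let every tracker of this widget analyze the event.
        for tracker in self.trackers.values():
            tracker.handle_event(event)

    def trigger(self, gesture_type, gesture):
        # Called by a tracker when it has detected a gesture.
        for handler in self.handlers.get(gesture_type, []):
            handler(gesture)
\end{verbatim}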
\section{Example usage}
\label{sec:example}

This section describes an example that illustrates the API of the
architecture. The example application listens to tap events on a button.
The button is located inside an application window, which can be resized
using pinch gestures.
\begin{verbatim}
initialize GUI, creating a window

# Add widgets representing the application window and button
rootwidget = new rectangular Widget object
set rootwidget position and size to that of the application window
buttonwidget = new rectangular Widget object
set buttonwidget position and size to that of the GUI button

# Create an event server that will be started later
server = new EventServer object
set rootwidget as root widget for server

# Define handlers and bind them to corresponding widgets
begin function resize_handler(gesture)
    resize GUI window
    update position and size of root widget
end function

begin function tap_handler(gesture)
    # Perform some action that the button is meant to do
end function

bind ('pinch', resize_handler) to rootwidget
bind ('tap', tap_handler) to buttonwidget

# Start event server (which in turn starts a driver-specific event driver)
start server
\end{verbatim}
\examplediagram{Diagram representation of the example above. Dotted arrows
represent gestures, normal arrows represent events (unless labeled
otherwise).}
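Expressed in the Python-style classes sketched in the previous sections, the
example could look roughly as follows. The GUI helper calls and the
\texttt{EventServer} constructor signature are placeholders; only the overall
shape of the API is intended.
\begin{verbatim}
# Rough Python sketch of the example above, reusing the illustrative
# TrackingWidget class from earlier sections (names are not a fixed API).
window = create_gui_window()         # placeholder GUI call
button = create_gui_button(window)   # placeholder GUI call

root_widget = TrackingWidget(window.x, window.y,
                             window.width, window.height)
button_widget = TrackingWidget(button.x, button.y,
                               button.width, button.height)
root_widget.add_child(button_widget)

server = EventServer(root_widget)    # assumed to wrap an event driver

def resize_handler(gesture):
    resize_gui_window(window, gesture)   # placeholder GUI call
    root_widget.width = window.width
    root_widget.height = window.height

def tap_handler(gesture):
    pass  # perform the action that the button is meant to do

root_widget.bind('pinch', resize_handler)
button_widget.bind('tap', tap_handler)

server.start()  # starts the driver-specific event driver loop
\end{verbatim}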
\chapter{Test applications}

% TODO
% test programs with PyGame/Cairo

\chapter{Suggestions for future work}

% TODO
% use a formal definition of gestures in gesture trackers, e.g. a state machine
% network protocol (ZeroMQ) for multiple languages and simultaneous processes
% intermediate layer that synchronizes the widget tree with an application framework

\bibliographystyle{plain}
\bibliography{report}

\appendix
\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control (OSC) \cite{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}. A Python implementation of the TUIO
protocol also exists: pyTUIO \cite{pyTUIO}. However, the execution of an
example script yields an error regarding Python's built-in \texttt{socket}
library. Therefore, the reference implementation uses the pyOSC package to
receive TUIO messages.
The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session IDs that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a
session ID, such as position, velocity and acceleration. Each session ID
represents an object. The only type of object on the multi-touch table is what
the TUIO protocol calls ``2DCur'', which is an $(x, y)$ position on the
screen.

ALIVE messages can be used to determine when an object touches and releases
the screen. For example, if a session ID was present in the previous message
but not in the current one, the object it represents has been lifted from the
screen. SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the .NET application \cite{win7touch}).
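As an illustration of how ALIVE and SET messages map onto these point events,
consider the following Python sketch. The way messages are received and
decoded (pyOSC handler registration) is deliberately abstracted away, the
\texttt{emit} callback is hypothetical, and message-ordering subtleties within
a TUIO bundle are ignored.
\begin{verbatim}
# Sketch: derive point_down/move/up events from decoded TUIO messages.
alive = set()     # session IDs currently touching the screen
positions = {}    # session ID -> last known (x, y)

def handle_alive(session_ids, emit):
    """Compare the new ALIVE list with the previous one."""
    global alive
    current = set(session_ids)
    for sid in current - alive:
        # Position may still be unknown if no SET message was seen yet.
        emit('point_down', sid, positions.get(sid))
    for sid in alive - current:
        emit('point_up', sid, positions.pop(sid, None))
    alive = current

def handle_set(sid, x, y, emit):
    """A SET message updates the position of an active session ID."""
    if sid in alive and positions.get(sid) != (x, y):
        emit('point_move', sid, (x, y))
    positions[sid] = (x, y)
\end{verbatim}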
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
    In order to compute the X and Y coordinates for the 2D profiles a TUIO
    tracker implementation needs to divide these values by the actual sensor
    dimension, while a TUIO client implementation consequently can scale these
    values back to the actual screen dimension.
\end{quote}
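A minimal sketch of such a conversion, assuming the window position and size
are known in screen pixels (the function name and parameters are illustrative):
\begin{verbatim}
def tuio_to_window(x, y, win_x, win_y, screen_width, screen_height):
    """Scale normalized TUIO coordinates (0.0-1.0) to screen pixels and
    make them relative to a window positioned at (win_x, win_y)."""
    screen_x = x * screen_width
    screen_y = y * screen_height
    return (screen_x - win_x, screen_y - win_y)
\end{verbatim}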
\chapter{Experimental program}
\label{app:experiment}

% TODO: rewrite intro
When designing a software library, its API should be understandable and easy
to use for programmers. To find out the basic requirements for a usable API,
an experimental program has been written based on the Processing code
from \cite{processingMT}. The program receives TUIO events and translates them
to point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted as (double or single) \emph{tap}, \emph{rotation} or
\emph{pinch} gestures. A simple drawing program then draws the current state
to the screen using the PyGame library. The output of the program can be seen
in figure \ref{fig:draw}.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.4]{data/experimental_draw.png}
    \caption{Output of the experimental drawing program. It draws the touch
    points and their centroid on the screen (the centroid is used as center
    point for rotation and pinch detection). It also draws a green
    rectangle which responds to rotation and pinch events.}
    \label{fig:draw}
\end{figure}
One of the first observations is that TUIO's \texttt{SET} messages use
the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these coordinates by its own dimensions, thus showing the
entire screen in its window. Also, the implementation only works using the
TUIO protocol; other drivers are not supported.

Though it uses relatively simple math, the rotation and pinch detection works
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle relative to the
centroid of all touch points, and \emph{pinch} uses the difference in
distance. Both values are normalized by dividing by the number of touch
points. A pinch event contains a scale factor, and therefore uses the ratio of
the current to the previous average distance to the centroid.
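This centroid-based detection can be sketched in Python as follows; the
function names are illustrative, and the sketch assumes the previous and
current point lists contain the same touch points in the same order.
\begin{verbatim}
import math

def centroid(points):
    n = float(len(points))
    return (sum(x for x, y in points) / n, sum(y for x, y in points) / n)

def pinch_scale(prev_points, points):
    """Scale factor: ratio of the current to the previous average
    distance of the touch points to their centroid."""
    def avg_distance(pts):
        cx, cy = centroid(pts)
        return sum(math.hypot(x - cx, y - cy) for x, y in pts) / len(pts)
    return avg_distance(points) / avg_distance(prev_points)

def rotation_angle(prev_points, points):
    """Average difference in angle relative to the centroid, normalized
    by the number of touch points."""
    cx, cy = centroid(points)
    px, py = centroid(prev_points)
    diff = 0.0
    for (x0, y0), (x1, y1) in zip(prev_points, points):
        diff += math.atan2(y1 - cy, x1 - cx) - math.atan2(y0 - py, x0 - px)
    return diff / len(points)
\end{verbatim}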
There is a flaw in this implementation. Since the centroid is calculated using
all current touch points, there cannot be two or more rotation or pinch
gestures simultaneously. On a large multi-touch table, it is desirable to
support interaction with multiple hands, or multiple persons, at the same
time. Such application-specific requirements should be defined in the
application itself, whereas the experimental implementation bases its
detection algorithms on its own test program.

Also, the different detection algorithms are all implemented in the same file,
making the code complex to read or debug, and difficult to extend.
\chapter{Reference implementation in Python}
\label{app:implementation}

% TODO
% only window.contains on point down, not on move/up
% a few simple windows and trackers

\end{document}