
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,float,tikz}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
% TODO
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents
\chapter{Introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
primitive ``click'' or ``tap'' events that are used by single-touch devices.
Some examples of more complex gestures are so-called ``pinch''\footnote{A
``pinch'' gesture is formed by performing a pinching movement with multiple
fingers on a multi-touch surface. Pinch gestures are often used to zoom in or
out on an object.} and ``flick''\footnote{A ``flick'' gesture is the act of
grabbing an object and throwing it in a direction on a touch surface, giving
it momentum to move for some time after the hand releases the surface.}
gestures.

The complexity of gestures is not limited to navigation in smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).
The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. With the increasing complexity of gestures, the
complexity of the logic required to detect these gestures increases as well.
This challenge limits, or even deters, application developers from using
complex gestures in an application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic.
Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
include the detection of commonly used gestures like \emph{pinch} gestures.
However, this detection logic is dependent on the application framework.
Consequently, an application developer who wants to use multi-touch interaction
in an application is forced to choose an application framework that includes
support for multi-touch gestures. Therefore, a requirement of the generic
architecture is that it must not be bound to a specific application framework.

Moreover, the set of supported gestures is limited by the application framework
of choice. To incorporate a custom event in an application, the application
developer needs to extend the framework. This requires extensive knowledge of
the framework's architecture. Also, if the same gesture is used in another
application that is based on another framework, the detection logic has to be
translated for use in that framework. Nevertheless, application frameworks are
a necessity when it comes to fast, cross-platform development. Therefore, the
architecture design should aim to be compatible with existing frameworks, but
provide a way to detect and extend gestures independent of the framework.
An application framework is written in a specific programming language. A
generic architecture should not be limited to a single programming language.
The ultimate goal of this thesis is to provide support for complex gesture
interaction in any application. Thus, applications should be able to address
the architecture using a language-independent method of communication. This
intention leads towards the concept of a dedicated gesture detection
application that serves gestures to multiple programs at the same time.

The scope of this thesis is limited to the detection of gestures on multi-touch
surface devices. It presents a design for a generic gesture detection
architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.

% FIXME: Should this still be part of the introduction?
% How can the input of the architecture be normalized? This is needed, because
% multi-touch drivers use their own specific message format.
\section{Structure of this document}
% TODO: write once the thesis is finished

\chapter{Related work}
\section{Gesture and Activity Recognition Toolkit}

The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. Its authors
state that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.

The use of machine learning is motivated by the claim that gesture detection
``is likely to become increasingly complex and unmanageable'' when a set of
predefined rules is used to decide whether some sensor input represents a
specific gesture. This claim is not necessarily true. If the programmer is
given a way to separate the detection of different types of gestures, and
flexibility in rule definitions, over-complexity can be avoided.
\section{Gesture recognition implementation for Windows 7}

The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures in a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts. The architecture presented in this thesis has
adopted this design feature by also using different gesture trackers to
track different gesture types.
% TODO: This is not really 'related', move it to somewhere else
\section{Processing implementation of simple gestures in Android}

An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment with
an export possibility for Android. See also \url{http://processing.org/}.}
can be found in a forum on the Processing website \cite{processingMT}. The
implementation is fairly simple, but it yields some very appealing results.

The detection logic of all gestures is combined in a single class. This
does not allow for extensibility, because the complexity of this class
would increase to an undesirable level (as predicted by the GART article
\cite{GART}). However, the detection logic itself is partially re-used in
the reference implementation of the generic gesture detection architecture.
\section{Analysis of related work}

The simple Processing implementation of multi-touch events provides most of
the functionality that can be found in existing multi-touch applications.
In fact, many applications for mobile phones and tablets only use tap and
scroll events. For this category of applications, using machine learning
seems excessive. Though the representation of a gesture as a feature
vector in a machine learning algorithm is a generic and formal way to
define a gesture, a programmer-friendly architecture should also support
simple, ``hard-coded'' detection code. A way to separate different pieces
of gesture detection code, thus keeping a code library manageable and
extensible, is to use different gesture trackers.
% FIXME: change title below
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The chapter represents the
architecture as a diagram of relations between different components.
Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
requirements for the architecture, and extend the diagram with components
that meet these requirements. Section \ref{sec:example} describes an
example usage of the architecture in an application.

\subsection*{Position of architecture in software}

The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram{A diagram showing the position of the architecture
relative to the device driver and a multi-touch application. The input
of the architecture is given by a touch device driver. This input is
translated to complex interaction gestures and passed to the
application that is using the architecture.}
\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol \cite{TUIO} is an example of a touch driver protocol that
can be used by multi-touch devices. TUIO uses ALIVE- and SET-messages to
communicate low-level touch events (see appendix \ref{app:tuio} for more
details). These messages are specific to the API of the TUIO protocol. Other
touch drivers may use very different message types. To support more than
one driver in the architecture, there must be some translation from
driver-specific messages to a common format for primitive touch events.
After all, the gesture detection logic in a ``generic'' architecture should
not be implemented based on driver-specific messages. The event types in
this format should be chosen so that multiple drivers can trigger the same
events. If each supported driver added its own set of event types to the
common format, the purpose of being ``common'' would be defeated.

A reasonable expectation for a touch device driver is that it detects
simple touch points, with a ``point'' being an object at an $(x, y)$
position on the touch surface. This yields a basic set of events:
$\{point\_down, point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Any other driver that can distinguish rotatable objects from
simple touch points could also trigger them.
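
To illustrate, such a common event format could be represented by a small
data structure like the following Python sketch (the class and attribute
names are merely illustrative and are not taken from the reference
implementation):
\begin{verbatim}
# Event types that any point-based touch driver can trigger.
POINT_DOWN, POINT_MOVE, POINT_UP = 'point_down', 'point_move', 'point_up'

# Additional types for drivers that can identify (rotatable) objects.
OBJECT_DOWN, OBJECT_MOVE, OBJECT_UP, OBJECT_ROTATE = \
    'object_down', 'object_move', 'object_up', 'object_rotate'


class TouchEvent(object):
    """A driver-independent touch event at an (x, y) surface position."""

    def __init__(self, event_type, x, y, object_id, angle=None):
        self.type = event_type      # one of the constants above
        self.x, self.y = x, y       # position on the touch surface
        self.object_id = object_id  # identifies the touch point or object
        self.angle = angle          # rotation, only for object_* events
\end{verbatim}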
The component that translates driver-specific messages to common events
will be called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
components in the architecture for translation to gestures. This
communication flow is illustrated in figure \ref{fig:driverdiagram}.

A touch device driver can be supported by adding an event driver
implementation for it. The event driver implementation that is used in an
application depends on which touch device driver is used.

\driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
showing the position of the event driver in the architecture. The event
driver translates driver-specific messages to a common set of events, which
are delegated to analysis components that will interpret them as more
complex gestures.}
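
In Python, the structure of an event driver could be sketched as follows
(assuming a simple callback-based interface; the class and method names are
hypothetical and do not necessarily match the reference implementation):
\begin{verbatim}
class EventDriver(object):
    """Receives driver-specific messages and emits common touch events."""

    def __init__(self, delegate):
        # 'delegate' is called with every translated event, e.g. by the
        # component that delegates events to widgets.
        self.delegate = delegate

    def start(self):
        """Run the message receive loop; implemented per driver."""
        raise NotImplementedError

    def emit(self, event):
        # Hand a translated common event to the rest of the architecture.
        self.delegate(event)


class TUIOEventDriver(EventDriver):
    """Would listen for TUIO ALIVE/SET messages (e.g. using pyOSC) and
    call self.emit(...) for every point event it derives from them."""

    def start(self):
        pass  # the driver-specific receive loop is omitted in this sketch
\end{verbatim}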
\section{Restricting gestures to a screen area}

Touch input devices are unaware of the graphical input widgets rendered on
screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on screen, an application programmer should
be able to bind a gesture handler to some element on the screen. For
example, a button tap\footnote{A ``tap'' gesture is triggered when a touch
object releases the screen within a certain time and distance from the
point where it initially touched the screen.} should only occur on the
button itself, and not in any other area of the screen. A solution to this
problem is the use of \emph{widgets}. The button from the example can be
represented as a rectangular widget with a position and size. The position
and size are compared with event coordinates to determine whether an event
occurs within the button.
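
In Python, this comparison amounts to a simple containment check (the class
below is only a sketch with hypothetical names):
\begin{verbatim}
class RectangularWidget(object):
    """A rectangular screen area with a position and size."""

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def contains(self, event):
        """True if the event coordinates fall inside this widget."""
        return (self.x <= event.x <= self.x + self.width
                and self.y <= event.y <= self.y + self.height)
\end{verbatim}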
\subsection*{Widget tree}

A problem occurs when widgets overlap. If a button is placed over a
container and an event occurs inside the button, should the
button handle the event first? And should the container receive the
event at all, or should it be reserved for the button?

The solution to this problem is to save widgets in a tree structure.
There is one root widget, whose size is limited by the size of the
touch screen. Being the leaf widget, and thus the widget that is
actually touched when an object touches the device, the button widget
should receive an event before its container does. However, events
occur on a screen-wide level and thus at the root level of the widget
tree. Therefore, an event is delegated down the tree before any analysis
is performed. Delegation stops at the ``lowest'' widget in the tree
containing the event coordinates. That widget then performs some
analysis of the event, after which the event is released back to the
parent widget for analysis. This release of an event to a parent widget
is called \emph{propagation}. To be able to reserve an event for some
widget or analysis, the propagation of an event can be stopped during
analysis.
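
The following Python sketch outlines this delegation and propagation scheme
(the names are hypothetical and the analysis step is left as a stub):
\begin{verbatim}
class Widget(object):
    """A node in the widget tree, covering a rectangular screen area."""

    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height
        self.parent = None
        self.children = []

    def add_child(self, widget):
        widget.parent = self
        self.children.append(widget)

    def contains(self, event):
        return (self.x <= event.x <= self.x + self.width
                and self.y <= event.y <= self.y + self.height)

    def delegate(self, event):
        # Walk down to the lowest widget containing the event coordinates.
        for child in self.children:
            if child.contains(event):
                return child.delegate(event)
        self.analyze(event)

    def analyze(self, event):
        # Event analysis (the gesture trackers of this widget) would run
        # here and may stop the propagation of the event.
        stop_propagation = False
        if not stop_propagation and self.parent is not None:
            self.parent.analyze(event)  # propagate to the parent widget
\end{verbatim}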
% TODO: inspired by JavaScript DOM
Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
manage their widgets. This makes it easy to connect the architecture to
such a framework. For example, the programmer can define a
\texttt{GtkTouchWidget} that synchronizes the position of a touch
widget with that of a GTK widget, using GTK signals.
\subsection*{Callbacks}
\label{sec:callbacks}

When an event is propagated by a widget, it is first used for event
analysis on that widget. The event analysis can then trigger a gesture
in the widget, which has to be handled by the application. To handle a
gesture, the widget should provide a callback mechanism: the
application binds a handler for a specific type of gesture to a widget.
When a gesture of that type is triggered after event analysis, the
widget triggers the callback.
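
A minimal sketch of such a callback mechanism, assuming a gesture object
with a \texttt{type} attribute (the method names are hypothetical):
\begin{verbatim}
class GestureHandlingWidget(object):
    """Widget mix-in that maps gesture types to bound handlers."""

    def __init__(self):
        self.handlers = {}  # gesture type -> list of bound handlers

    def bind(self, gesture_type, handler):
        """Bind an application callback to a gesture type."""
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger(self, gesture):
        """Called by event analysis when a gesture has been detected."""
        for handler in self.handlers.get(gesture.type, []):
            handler(gesture)
\end{verbatim}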
\subsection*{Position of widget tree in architecture}

\widgetdiagram{Extension of the diagram from figure
\ref{fig:driverdiagram}, showing the position of widgets in the
architecture.}
\section{Event analysis}
\label{sec:event-analysis}

The events that are delegated to widgets must be analyzed in some way to
detect gestures. This analysis is specific to the type of gesture being
detected. For example, the detection of a ``tap'' gesture is very different
from the detection of a ``rotate'' gesture. The implementation described in
\cite{win7touch} separates the detection of different gestures into
different \emph{gesture trackers}. This keeps the different pieces of
detection code manageable and extensible. Therefore, the architecture also
uses gesture trackers to separate the analysis of events. A single gesture
tracker detects a specific set of gesture types, given a sequence of events.
An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
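
As a simple illustration of the concept, the sketch below shows what a
gesture tracker for ``tap'' gestures could look like, given the common point
events. The thresholds and names are assumptions, not those of the reference
implementation:
\begin{verbatim}
import time


class TapTracker(object):
    """Detects 'tap' gestures: a point that is released shortly after,
    and close to, the position where it touched the surface."""

    MAX_DURATION = 0.3  # seconds between point_down and point_up
    MAX_DISTANCE = 20   # maximum movement in pixels

    def __init__(self, on_gesture):
        self.on_gesture = on_gesture  # e.g. the widget's trigger method
        self.downs = {}               # object_id -> (x, y, timestamp)

    def on_point_down(self, event):
        self.downs[event.object_id] = (event.x, event.y, time.time())

    def on_point_up(self, event):
        if event.object_id not in self.downs:
            return
        x, y, t = self.downs.pop(event.object_id)
        moved = ((event.x - x) ** 2 + (event.y - y) ** 2) ** 0.5
        if (time.time() - t <= self.MAX_DURATION
                and moved <= self.MAX_DISTANCE):
            self.on_gesture('tap')  # report the detected gesture
\end{verbatim}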
\subsection*{Assignment of a gesture tracker to a widget}

As explained in section \ref{sec:callbacks}, events are delegated from
a widget to some event analysis. The analysis component of a widget
consists of a list of gesture trackers, each tracking a specific set of
gestures. No two trackers in the list should be tracking the same
gesture type.

When a handler for a gesture is ``bound'' to a widget, the widget
asserts that it has a tracker that is tracking this gesture. Thus, the
programmer does not create gesture trackers manually. Figure
\ref{fig:trackerdiagram} shows the position of gesture trackers in the
architecture.

\trackerdiagram{Extension of the diagram from figure
\ref{fig:widgetdiagram}, showing the position of gesture trackers in
the architecture.}
\section{Serving multiple applications}
% TODO

\section{Example usage}
\label{sec:example}

This section describes an example that illustrates the API of the
architecture. The example application listens to tap events on a button.
The button is located inside an application window, which can be resized
using pinch gestures.
\begin{verbatim}
initialize GUI, creating a window

# Add widgets representing the application window and button
rootwidget = new rectangular Widget object
set rootwidget position and size to that of the application window
buttonwidget = new rectangular Widget object
set buttonwidget position and size to that of the GUI button

# Create an event server that will be started later
server = new EventServer object
set rootwidget as root widget for server

# Define handlers and bind them to corresponding widgets
begin function resize_handler(gesture)
    resize GUI window
    update position and size of root widget
end function

begin function tap_handler(gesture)
    # Perform some action that the button is meant to do
end function

bind ('pinch', resize_handler) to rootwidget
bind ('tap', tap_handler) to buttonwidget

# Start event server (which in turn starts a driver-specific event server)
start server
\end{verbatim}
\examplediagram{Diagram representation of the example above. Dotted arrows
represent gestures, normal arrows represent events (unless labeled
otherwise).}
\chapter{Test applications}

To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.

The reference implementation is a proof of concept that translates TUIO
messages to some simple touch gestures (see appendix \ref{app:implementation}
for details).

% TODO
% test programs with PyGame/Cairo

\chapter{Suggestions for future work}

% TODO
% use a formal definition of gestures in the gesture trackers, e.g. a state machine
% a network protocol (ZeroMQ) for multiple languages and simultaneous processes
% an intermediate layer that synchronizes the widget tree with an application framework

\bibliographystyle{plain}
\bibliography{report}{}

\appendix
\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a session
id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'': an $(x, y)$
position on the screen.
ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was present in the previous message but
not in the current one, the object it represents has been lifted from the
screen.

SET messages provide information about movement. In the case of simple $(x, y)$
positions, only the movement vector of the position itself can be calculated.
For more complex objects such as fiducials, arguments like rotational position
and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the \cite[.NET application]{win7touch}).
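
For example, comparing the session id sets of two consecutive ALIVE messages
yields the ``point down'' and ``point up'' events, while SET messages would
similarly yield ``point move'' events; a small Python sketch:
\begin{verbatim}
def diff_alive(previous_ids, current_ids):
    """Return (new ids, lifted ids) between two ALIVE messages."""
    previous_ids, current_ids = set(previous_ids), set(current_ids)
    downs = current_ids - previous_ids  # appeared: "point down"
    ups = previous_ids - current_ids    # disappeared: "point up"
    return downs, ups


# Session id 2 is new, session id 1 has been lifted from the screen.
print(diff_alive([1, 3], [2, 3]))  # -> ({2}, {1})
\end{verbatim}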
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the left
top corner of the screen and $(1.0, 1.0)$ the right bottom corner. To map
events to a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
    In order to compute the X and Y coordinates for the 2D profiles a TUIO
    tracker implementation needs to divide these values by the actual sensor
    dimension, while a TUIO client implementation consequently can scale these
    values back to the actual screen dimension.
\end{quote}
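
This scaling step amounts to a simple multiplication; a sketch in Python
(the window offset and dimensions are assumptions made for the example):
\begin{verbatim}
def tuio_to_window(x, y, win_x, win_y, win_width, win_height):
    """Map TUIO coordinates in [0, 1] to pixel coordinates in a window."""
    return (win_x + x * win_width, win_y + y * win_height)


# The center of the surface, mapped to a 400x300 window at position (50, 20):
print(tuio_to_window(0.5, 0.5, 50, 20, 400, 300))  # -> (250.0, 170.0)
\end{verbatim}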
\chapter{Experimental program}
\label{app:experiment}

% TODO: rewrite intro
When designing a software library, its API should be understandable and easy to
use for programmers. To find out the basic requirements for a usable API, an
experimental program has been written based on the Processing code from
\cite{processingMT}. The program receives TUIO events and translates them
to point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted as (double or single) \emph{tap}, \emph{rotation} or
\emph{pinch} gestures. A simple drawing program then draws the current state to
the screen using the PyGame library. The output of the program can be seen in
figure \ref{fig:draw}.

\begin{figure}[H]
    \centering
    \includegraphics[scale=0.4]{data/experimental_draw.png}
    \caption{Output of the experimental drawing program. It draws the touch
        points and their centroid on the screen (the centroid is used as
        center point for rotation and pinch detection). It also draws a green
        rectangle which responds to rotation and pinch events.}
    \label{fig:draw}
\end{figure}
One of the first observations is the fact that TUIO's \texttt{SET} messages use
the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these coordinates by its own dimensions, thus showing the
entire screen in its window. Also, the implementation only works with the TUIO
protocol; other drivers are not supported.

Though using relatively simple math, the rotation and pinch events work
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle relative to the
centroid of all touch points, and \emph{pinch} uses the difference in distance.
Both values are normalized by dividing by the number of touch points. A
pinch event contains a scale factor, and therefore uses the division of the
current by the previous average distance to the centroid.
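
The calculations described above roughly correspond to the following Python
sketch (the function names are illustrative; the actual code in the
experimental program may differ in its details):
\begin{verbatim}
from math import atan2, hypot


def centroid(points):
    """Average position of a list of (x, y) touch points."""
    xs, ys = zip(*points)
    return (sum(xs) / float(len(xs)), sum(ys) / float(len(ys)))


def rotation_delta(previous, current):
    """Average change in angle of the touch points around the centroid."""
    cx, cy = centroid(current)
    total = 0.0
    for (px, py), (x, y) in zip(previous, current):
        total += atan2(y - cy, x - cx) - atan2(py - cy, px - cx)
    return total / len(current)  # normalized by the number of touch points


def pinch_scale(previous, current):
    """Current average distance to the centroid divided by the previous."""
    cx, cy = centroid(current)
    pcx, pcy = centroid(previous)
    cur = sum(hypot(x - cx, y - cy) for x, y in current) / len(current)
    prev = sum(hypot(x - pcx, y - pcy) for x, y in previous) / len(previous)
    return cur / prev
\end{verbatim}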
There is a flaw in this implementation. Since the centroid is calculated using
all current touch points, there cannot be two or more rotation or pinch
gestures simultaneously. On a large multi-touch table, it is desirable to
support interaction with multiple hands, or multiple persons, at the same time.
Such application-specific requirements should be defined in the application
itself, whereas the experimental implementation tailors its detection
algorithms to its test program.

Also, the different detection algorithms are all implemented in the same file,
making the code complex to read or debug, and difficult to extend.
\chapter{Reference implementation in Python}
\label{app:implementation}

% TODO
% only call window.contains on point down, not on move/up
% a few simple windows and trackers

\end{document}