
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}
% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
Applications that use complex gesture-based interaction need to translate
primitive messages from low-level device drivers to complex, high-level
gestures, and map these gestures to elements in an application. This report
presents a generic architecture for the detection of complex gestures in an
application. The architecture translates device driver messages to a common
set of ``events''. The events are then delegated to a tree of ``areas'', which
are used to separate groups of events and assign these groups to an element in
the application. Gesture detection is performed on a group of events assigned
to an area, using detection units called ``gesture trackers''. An
implementation of the architecture as a daemon process would be capable of
serving gestures to multiple applications at the same time. A reference
implementation and two test case applications have been created to test the
effectiveness of the architecture design.
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents
\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
the primitive ``click'' or ``tap'' events used by single-touch devices. Some
examples of more complex gestures are ``pinch''\footnote{A ``pinch'' gesture is
formed by performing a pinching movement with multiple fingers on a multi-touch
surface. Pinch gestures are often used to zoom in or out on an object.} and
``flick''\footnote{A ``flick'' gesture is the act of grabbing an object and
throwing it in a direction on a touch surface, giving it momentum to move for
some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
may be extended or even replaced with in-air interaction (Microsoft's Kinect
\cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. As gestures become more complex, the complexity
of the logic required to detect them increases as well. This challenge limits,
or even deters, application developers from using complex gestures in an
application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed that is
capable of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. Given such an
implementation, application developers would not need to reinvent gesture
detection for every new gesture-based application.

Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
already include the detection of commonly used gestures like the \emph{pinch}
gesture. However, this detection logic is tied to the application framework.
Consequently, an application developer who wants to use multi-touch interaction
in an application is forced to use an application framework that includes
support for multi-touch gestures. Moreover, the set of supported gestures is
limited by the application framework of choice. To incorporate a custom gesture
in an application, the application developer needs to extend the framework,
which requires extensive knowledge of the framework's architecture. Also, if
the same gesture is needed in another application that is based on another
framework, the detection logic has to be translated for use in that framework.
Nevertheless, application frameworks are a necessity when it comes to fast,
cross-platform development. A generic architecture design should therefore aim
to be compatible with existing frameworks, and provide a way to detect and
extend gestures independently of the framework.

Application frameworks are written in a specific programming language. To
support multiple frameworks and programming languages, the architecture should
be accessible to applications through a language-independent method of
communication. This intention leads towards the concept of a dedicated gesture
detection application that serves gestures to multiple applications at the same
time.

The scope of this thesis is limited to the detection of gestures on multi-touch
surface devices. It presents a design for a generic gesture detection
architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.
\section{Structure of this document}
% TODO: write once the thesis is finished
\chapter{Related work}

\section{Gesture and Activity Recognition Toolkit}
The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. The toolkit
states that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.

The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when a set of
predefined rules is used to detect whether some sensor input constitutes a
specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures, and flexibility in rule definitions, over-complexity can be
avoided.

\section{Gesture recognition implementation for Windows 7}
The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of predefined
gestures.

An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts.
\section{Analysis of related work}
The simple Processing implementation of multi-touch gestures
\cite{processingMT} provides most of the functionality that can be found in
existing multi-touch applications. In fact, many applications for mobile
phones and tablets only use tap and scroll events. For this category of
applications, using machine learning seems excessive. Though the
representation of a gesture as a feature vector in a machine learning
algorithm is a generic and formal way to define a gesture, a
programmer-friendly architecture should also support simple, ``hard-coded''
detection code. A way to separate different pieces of gesture detection code,
thus keeping a code library manageable and extensible, is to use different
gesture trackers.
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}
This chapter describes a design for a generic multi-touch gesture detection
architecture. The architecture is represented as a diagram of relations
between different components. Sections \ref{sec:driver-support} to
\ref{sec:daemon} define requirements for the architecture, and extend the
diagram with components that meet these requirements. Section
\ref{sec:example} describes an example usage of the architecture in an
application.

The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram
\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol \cite{TUIO} is an example of a driver that can be used by
multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
may use different message types. To support more than one driver in the
architecture, there must be some translation from driver-specific messages
to a common format for primitive touch events. After all, the gesture
detection logic in a ``generic'' architecture should not be implemented
based on driver-specific messages. The event types in this format should be
chosen so that multiple drivers can trigger the same events. If each
supported driver were to add its own set of event types to the common
format, the purpose of it being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.
The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

The component that translates driver-specific messages to common events
will be called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
components in the architecture for translation to gestures. This
communication flow is illustrated in figure \ref{fig:driverdiagram}.

\driverdiagram
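
To illustrate the common event format, the sketch below shows a possible
representation of an event and of an event driver base class in Python, the
language of the reference implementation. The names used here (\texttt{Event},
\texttt{EventDriver}, \texttt{delegate\_event}) are illustrative, not
prescribed by the architecture.

\begin{verbatim}
# A minimal sketch (illustrative names, not a definitive API) of the
# common event format and an event driver base class.

POINT_DOWN, POINT_MOVE, POINT_UP = 'point_down', 'point_move', 'point_up'

class Event(object):
    """A driver-independent primitive event."""
    def __init__(self, event_type, x, y, object_id):
        self.type = event_type      # one of the common event types above
        self.x, self.y = x, y       # position on the touch surface
        self.object_id = object_id  # identifies the touch object over time

class EventDriver(object):
    """Translates driver-specific messages to common events."""
    def __init__(self, root_area):
        self.root_area = root_area

    def delegate(self, event):
        # Hand the common event to the rest of the architecture.
        self.root_area.delegate_event(event)

    def run(self):
        # A concrete driver (e.g. a TUIO event driver) overrides this
        # method: it receives driver messages in a loop, builds Event
        # objects and calls self.delegate() for each of them.
        raise NotImplementedError
\end{verbatim}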
Support for a touch driver can be added by adding an event driver
implementation. The choice of event driver implementation used in an
application depends on the driver support of the touch device being used.

Because driver implementations have a common output format in the form of
events, multiple event drivers can run at the same time (see figure
\ref{fig:multipledrivers}). This design feature allows low-level events
from multiple devices to be aggregated into high-level gestures.

\multipledriversdiagram
\section{Restricting events to a screen area}
\label{sec:areas}
% TODO: in introduction: gestures are composed of multiple primitives

Touch input devices are unaware of the graphical input
widgets\footnote{``Widget'' is a name commonly used to identify an element
of a graphical user interface (GUI).} of an application, and therefore
generate events that simply identify the screen location at which an event
takes place. User interfaces of applications that do not run in full screen
mode are contained in a window. Events that occur outside the application
window should, in most cases, not be handled by the program. What's more,
widgets within the application window itself should be able to respond to
different gestures. For example, a button widget may respond to a ``tap''
gesture to be activated, whereas the application window responds to a
``pinch'' gesture to be resized. In order to be able to direct a gesture to
a particular widget in an application, a gesture must be restricted to the
area of the screen covered by that widget. An important question is whether
the architecture should offer a solution to this problem, or leave the task
of assigning gestures to application widgets to the application developer.

If the architecture does not provide a solution, the ``Event analysis''
component in figure \ref{fig:multipledrivers} receives all events that
occur on the screen surface. The gesture detection logic thus uses all
events as input to detect a gesture. This leaves no possibility for a
gesture to occur at multiple screen positions at the same time, unless the
gesture detection logic incorporates event cluster detection. The problem
is illustrated in figure \ref{fig:ex1}, where two widgets on the screen can
be rotated independently. The component that detects rotation gestures
receives all four fingers as input. If the two groups of finger events are
not separated by cluster detection, only one rotation event will occur.

\examplefigureone
A gesture detection component could perform heuristic cluster detection
based on the distance between events. However, this method cannot
guarantee that a cluster of events corresponds to a particular
application widget. In short, gesture detection is difficult to implement
without awareness of the location of application widgets. Moreover, the
application developer would still need to direct gestures to a particular
widget manually. This requires geometric calculations in the application
logic, which is a tedious and error-prone task for the developer.

A better solution is to group events that occur inside the area covered by
a widget, before passing them on to a gesture detection component.
Different gesture detection components can then detect gestures
simultaneously, based on different sets of input events. An area of the
screen surface is represented by an \emph{event area}. An event area
filters input events based on their location, and then delegates events to
gesture detection components that are assigned to the event area. Events
located outside the event area are not delegated to its gesture detection
components.

In the example of figure \ref{fig:ex1}, the two rotatable widgets can be
represented by two event areas, each having its own rotation detection
component.
\subsection*{Callback mechanism}
When a gesture is detected by a gesture detection component, it must be
handled by the client application. A common way to handle events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, and that function is called when the event occurs.
Because developers are familiar with this concept, the architecture uses a
callback mechanism to handle gestures in an application. Since an area
controls the grouping of events, and thus the occurrence of gestures in that
area, gesture handlers for a specific gesture type are bound to an area.
Figure \ref{fig:areadiagram} shows the position of areas in the
architecture.

\areadiagram
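
The sketch below illustrates how an event area could filter events by location
and how gesture handlers could be bound to it. It is a minimal example with
illustrative names; the reference implementation may differ in detail.

\begin{verbatim}
# A sketch (illustrative names) of an event area that filters events by
# location and calls the gesture handlers bound by the application.

class RectangularArea(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.trackers = []   # gesture detection components
        self.handlers = {}   # gesture type -> list of callbacks

    def contains(self, event):
        return (self.x <= event.x < self.x + self.width and
                self.y <= event.y < self.y + self.height)

    def delegate_event(self, event):
        # Only events inside the area reach its gesture trackers.
        if self.contains(event):
            for tracker in self.trackers:
                tracker.handle_event(event)

    def bind(self, gesture_type, handler):
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger_gesture(self, gesture_type, **kwargs):
        # Called by a gesture tracker when it detects a gesture.
        for handler in self.handlers.get(gesture_type, []):
            handler(**kwargs)
\end{verbatim}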
%Note that the boundaries of an area are only used to group events, not
%gestures. A gesture could occur outside the area that contains its
%originating events, as illustrated by the example in figure \ref{fig:ex2}.
%\examplefiguretwo

A remark must be made about the use of event areas to assign events to the
detection of some gesture. The concept of an event area is based on the
assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
This is a reasonable assumption for simple touch objects whose only
parameter is a position, such as a pen or a human finger. However, more
complex touch objects can have additional parameters, such as rotational
orientation or color. An even more generic concept is the \emph{event
filter}, which decides whether an event should be assigned to a particular
gesture detection component based on all available parameters. This level of
abstraction allows for constraints like ``use all blue objects within a
widget for rotation, and green objects for dragging''. As mentioned in the
introduction chapter [\ref{chapter:introduction}], the scope of this thesis
is limited to multi-touch surface based devices, for which the \emph{event
area} concept suffices. Section \ref{sec:eventfilter} explores the
possibility of replacing event areas with event filters.
\subsection{Area tree}
\label{sec:tree}

The simplest use of event areas in the architecture would be a flat list of
event areas. When the event driver delegates an event, it is accepted by
each event area that contains the event coordinates.

If the architecture were to be used in combination with an application
framework like GTK \cite{GTK}, each GTK widget that responds to gestures
would need a mirroring event area that synchronizes its location with that
of the widget. Consider a panel with five buttons that all listen to a
``tap'' event. If the location of the panel changes as a result of movement
of the application window, the positions of all buttons have to be updated
too.

This process is simplified by arranging event areas in a tree structure. A
root event area represents the panel, containing five other event areas
which are positioned relative to the root area. The relative positions do
not need to be updated when the panel area changes its position. GUI
frameworks, like GTK, use this kind of tree structure to manage graphical
widgets.

If the GUI toolkit provides an API for requesting the position and size of
a widget, a recommended first step when developing an application is to
create a subclass of the area that automatically synchronizes with the
position of a widget from the GUI framework.
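
A minimal sketch of such a tree, with child positions stored relative to their
parent, could look as follows (the names are illustrative):

\begin{verbatim}
# A sketch of event areas arranged in a tree; child positions are stored
# relative to the parent, so moving the parent moves the whole subtree.

class EventArea(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y              # position relative to the parent
        self.width, self.height = width, height
        self.parent = None
        self.children = []

    def add_child(self, area):
        area.parent = self
        self.children.append(area)

    def absolute_position(self):
        if self.parent is None:
            return self.x, self.y
        px, py = self.parent.absolute_position()
        return px + self.x, py + self.y

# A panel with five buttons: when the application window moves, only the
# panel position changes; the button offsets remain valid.
panel = EventArea(100, 100, 520, 60)
for i in range(5):
    panel.add_child(EventArea(10 + i * 100, 10, 80, 40))
panel.x, panel.y = 250, 300   # window moved; children need no update
\end{verbatim}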
\subsection{Event propagation}
\label{sec:eventpropagation}

A problem occurs when event areas overlap, as shown by figure
\ref{fig:eventpropagation}. When the white square is rotated, the gray
square should keep its current orientation. This means that events that are
used for rotation of the white square should not be used for rotation of
the gray square. The use of event areas alone does not provide a solution
here, since both the gray and the white event area accept an event that
occurs within the white square.

The problem described above is a common problem in GUI applications, and
there is a common solution (used by GTK \cite{gtkeventpropagation}, among
others). An event is passed to an ``event handler''. If the handler returns
\texttt{true}, the event is considered ``handled'' and is not
``propagated'' to other widgets.

Applied to the example of the rotating squares, the rotation detection
component of the white square should stop the propagation of events to the
event area of the gray square. This is illustrated in figure
\ref{fig:eventpropagation}.

In the example, rotation of the white square has priority over rotation of
the gray square because the white area is the widget actually being touched
on the screen surface. In general, events should be delegated to event
areas according to the order in which the event areas are positioned over
each other. The tree structure in which event areas are arranged is an
ideal tool to determine the order in which an event is delegated. Event
areas in deeper layers of the tree are positioned on top of their parent.
An object touching the screen is essentially touching the deepest event
area in the tree that contains the triggered event. That event area should
be the first to delegate the event to its gesture detection components, and
then propagate the event up in the tree to its ancestors. A gesture
detection component can stop the propagation of the event.

An additional type of event propagation is ``immediate propagation'', which
refers to the propagation of an event from one gesture detection component
to another. This is applicable when an event area uses more than one
gesture detection component. One of the components can stop the immediate
propagation of an event, so that the event is not passed to the next
gesture detection component, nor to the ancestors of the event area.
When regular propagation is stopped, the event is still passed to the
remaining gesture detection components of the current event area first,
before the propagation to ancestors actually stops.
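
The delegation and propagation rules described above could be implemented
along the following lines (a sketch with illustrative names):

\begin{verbatim}
# A sketch of event delegation in the area tree: the deepest area that
# contains the event handles it first, after which the event bubbles up
# to its ancestors unless propagation is stopped.

class Event(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.propagation_stopped = False
        self.immediate_propagation_stopped = False

    def stop_propagation(self):
        self.propagation_stopped = True

    def stop_immediate_propagation(self):
        self.immediate_propagation_stopped = True

class EventArea(object):
    def __init__(self, contains):
        self.contains = contains    # function(event) -> bool
        self.children = []
        self.trackers = []

    def delegate_event(self, event):
        # Deepest area first: let a child that contains the event handle it.
        for child in self.children:
            if child.contains(event):
                child.delegate_event(event)
                break
        # If a deeper area stopped (immediate) propagation, the trackers of
        # this area and of its ancestors are skipped.
        if event.propagation_stopped or event.immediate_propagation_stopped:
            return
        for tracker in self.trackers:
            tracker.handle_event(event)
            if event.immediate_propagation_stopped:
                break
        # Returning without a "stopped" flag lets the parent area (the
        # caller) pass the event to its own trackers as well.
\end{verbatim}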
\eventpropagationfigure
\newpage
\section{Detecting gestures from events}
\label{sec:gesture-detection}

The low-level events that are grouped by an event area must be translated
to high-level gestures in some way. Simple gestures, such as a tap or the
dragging of an element using one finger, are easy to detect by comparing
the positions of sequential $point\_down$ and $point\_move$ events. More
complex gestures, like the writing of a character from the alphabet,
require more advanced detection algorithms.

A way to detect complex gestures based on a sequence of input features
is with the use of machine learning methods, such as Hidden Markov
Models\footnote{A Hidden Markov Model (HMM) is a statistical model without a
memory; it can be used to detect gestures based on the current input state
alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped
to a feature vector that is recognized as a particular gesture with a
certain probability. An advantage of using machine learning over an
imperative programming style is that complex gestures can be described
without the use of explicit detection logic. For example, the detection of
the character `A' being written on the screen is difficult to implement
using an imperative programming style, while a trained machine learning
system can produce a match with relative ease.

Sequences of events triggered by a multi-touch surface are often of
manageable complexity. An imperative programming style is sufficient to
detect many common gestures, like rotation and dragging. The imperative
programming style is also familiar and understandable for a wide range of
application developers. Therefore, the architecture should support an
imperative style of gesture detection.

A problem with the imperative programming style is that the explicit
detection of different gestures requires different gesture detection
components. If these components are not managed well, the detection logic
is prone to become chaotic and over-complex.
To manage complexity and support multiple methods of gesture detection, the
architecture adopts the tracker-based design described by \cite{win7touch}.
Different detection components are wrapped in separate gesture tracking
units, or \emph{gesture trackers}. The input of a gesture tracker is
provided by an event area in the form of events. When a gesture tracker
detects a gesture, this gesture is triggered in the corresponding event
area. The event area then calls the callbacks which are bound to the
gesture type by the application. Figure \ref{fig:trackerdiagram} shows the
position of gesture trackers in the architecture.

\trackerdiagram

The use of gesture trackers as small detection units makes the architecture
extensible. A developer can write a custom gesture tracker and register it
in the architecture. The tracker can use any type of detection logic
internally, as long as it translates events to gestures. An example of a
possible gesture tracker implementation is a ``transformation tracker''
that detects rotation, scaling and translation gestures.
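
As an illustration, a strongly simplified ``tap'' tracker could be implemented
as follows. The names and the time threshold are chosen for this example only
and do not reflect the exact API of the reference implementation.

\begin{verbatim}
# A sketch of a gesture tracker: a small unit that receives events from its
# event area and triggers gestures on that area when it detects them.

import time

class GestureTracker(object):
    def __init__(self, area):
        self.area = area   # the event area this tracker is assigned to

class TapTracker(GestureTracker):
    """Triggers a 'tap' gesture when a point goes down and up quickly."""
    MAX_DURATION = 0.3   # seconds; an arbitrary threshold for this example

    def __init__(self, area):
        super(TapTracker, self).__init__(area)
        self.down_times = {}

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down_times[event.object_id] = time.time()
        elif event.type == 'point_up':
            started = self.down_times.pop(event.object_id, None)
            if started and time.time() - started < self.MAX_DURATION:
                # The area calls the 'tap' handlers bound by the application.
                self.area.trigger_gesture('tap', x=event.x, y=event.y)
\end{verbatim}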
\section{Serving multiple applications}
\label{sec:daemon}

The design of the architecture is essentially complete with the components
specified in this chapter. However, one requirement has not yet been
discussed: the ability to address the architecture using a method of
communication that is independent of the application programming language.

If an application must start the architecture instance in a thread within
the application itself, the architecture is required to be compatible with
the programming language used to write the application. To overcome this
language barrier, an instance of the architecture would have to run in a
separate process.

A common and efficient way of communication between two separate processes
is through the use of a network protocol. In this particular case, the
architecture can run as a daemon\footnote{``Daemon'' is the Unix term for a
process that runs in the background.} process, listening to driver messages
and triggering gestures in registered applications.

\vspace{-0.3em}
\daemondiagram

An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would have to start its own instance of the architecture in a
separate process, which would be less efficient.
\section{Example usage}
\label{sec:example}

This section describes an extended example to illustrate the data flow of
the architecture. The example application listens to tap events on a button
within an application window. The window also contains a draggable circle.
The application window can be resized using \emph{pinch} gestures. Figure
\ref{fig:examplediagram} shows the architecture created by the pseudo code
below.

\begin{verbatim}
initialize GUI framework, creating a window and necessary GUI widgets
create a root area that synchronizes position and size with the application window
define 'pinch' gesture handler and bind it to the root area
create an area with the position and radius of the circle
define 'drag' gesture handler and bind it to the circle area
create an area with the position and size of the button
define 'tap' gesture handler and bind it to the button area
create a new event server and assign the created root area to it
start the event server in a new thread
start the GUI main loop in the current thread
\end{verbatim}

\examplediagram
\chapter{Test applications}

A reference implementation of the design has been written in Python. Two test
applications have been created to test whether the design ``works'' in a
practical application, and to detect its flaws. One application is mainly used
to test the gesture tracker implementations. The other program uses areas in a
tree, demonstrating event delegation and propagation.

To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.

%The reference implementation and its test applications are a Proof of Concept,
%meant to show that the architecture design is effective.
%that translates TUIO messages to some common multi-touch gestures.
\section{Reference implementation}
\label{sec:implementation}

The reference implementation is written in Python and available at
\cite{gitrepos}. The following component implementations are included:

\textbf{Event drivers}
\begin{itemize}
\item TUIO driver, using only the support for simple touch points with an
$(x, y)$ position.
\end{itemize}

\textbf{Gesture trackers}
\begin{itemize}
\item Basic tracker, supports $point\_down,~point\_move,~point\_up$ gestures.
\item Tap tracker, supports $tap,~single\_tap,~double\_tap$ gestures.
\item Transformation tracker, supports $rotate,~pinch,~drag$ gestures.
\end{itemize}

\textbf{Areas}
\begin{itemize}
\item Circular area
\item Rectangular area
\item Full screen area
\end{itemize}

The implementation does not include a network protocol to support the daemon
setup described in section \ref{sec:daemon}. Therefore, it is only usable in
Python programs, and the two test programs are consequently also written in
Python.

The area implementations contain some geometric functions to determine whether
an event should be delegated to an area. All gesture trackers have been
implemented using an imperative programming style. Technical details about the
implementation of gesture detection are described in appendix
\ref{app:implementation-details}.
\section{Full screen Pygame program}

%The goal of this program was to experiment with the TUIO
%protocol, and to discover requirements for the architecture that was to be
%designed. When the architecture design was completed, the program was rewritten
%using the new architecture components. The original variant is still available
%in the ``experimental'' folder of the Git repository \cite{gitrepos}.

An implementation of the detection of some simple multi-touch gestures (single
tap, double tap, rotation, pinch and drag) using Processing\footnote{Processing
is a Java-based programming environment with an export possibility for Android.
See also \cite{processing}.} can be found in a forum on the Processing website
\cite{processingMT}. The program has been ported to Python and adapted to
receive input from the TUIO protocol. The implementation is fairly simple, but
it yields some appealing results (see figure \ref{fig:draw}). In the original
program, the detection logic of all gestures is combined in a single class
file. As predicted by the GART article \cite{GART}, this leads to over-complex
code that is difficult to read and debug.

The application has been rewritten using the reference implementation of the
architecture. The detection code is separated into two different gesture
trackers, which are the ``tap'' and ``transformation'' trackers mentioned in
section \ref{sec:implementation}.

The application receives TUIO events and translates them to \emph{point\_down},
\emph{point\_move} and \emph{point\_up} events. These events are then
interpreted as \emph{single tap}, \emph{double tap}, \emph{rotation} or
\emph{pinch} gestures. The positions of all touch objects are drawn using the
Pygame library. Since the Pygame library does not provide support to find the
location of the display window, the root area captures events on the entire
screen surface. The application can be run either full screen or in windowed
mode. If windowed, screen-wide gesture coordinates are mapped to the size of
the Pygame window. In other words, the Pygame window always represents the
entire touch surface. The output of the program can be seen in figure
\ref{fig:draw}.
\begin{figure}[h!]
\center
\includegraphics[scale=0.4]{data/pygame_draw.png}
\caption{Output of the experimental drawing program. It draws all touch
points and their centroid on the screen (the centroid is used for rotation
and pinch detection). It also draws a green rectangle which responds to
rotation and pinch events.}
\label{fig:draw}
\end{figure}
\section{GTK/Cairo program}

The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create
its user interface. Since GTK+ defines a main event loop that must be started
in order to use the interface, the architecture implementation runs in a
separate thread. The application creates a main window, whose size and
position are synchronized with the root area of the architecture.

% TODO
\emph{TODO: extend this section and add screenshots (this program is not
finished yet).}
\section{Discussion}
% TODO
\emph{TODO: point out the shortcomings of the design that emerge from the
tests.}
% Different devices/drivers emit different kinds of primitive events. A
% translation of these device-specific events to a common event format is
% needed to perform gesture detection in a generic way.
% By passing the input of multiple drivers through the same event driver
% layer, multiple devices are supported at the same time.
% The event driver delivers low-level events. Not every event belongs to every
% gesture, so the events must be filtered to determine which events belong to
% which gesture. Areas provide this possibility on devices for which the
% filtering is location-based.
% Splitting gesture detection over gesture trackers is a way to stay flexible
% in the supported types of detection logic, and to keep complexity
% manageable.
\chapter{Suggestions for future work}

\section{A generic method for grouping events}
\label{sec:eventfilter}

As mentioned in section \ref{sec:areas}, the concept of an \emph{area} is based
on the assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
Since this thesis focuses on multi-touch surface based devices, and every
object on a multi-touch surface has a position, this assumption is valid.
However, the design of the architecture is meant to be more generic: to provide
a structured way of managing gesture detection.

An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect},
provides 3D positions. Some multi-touch tables work with a camera that can also
determine the shape and rotational orientation of objects touching the surface.
For these devices, events delegated by the event driver have more parameters
than a 2D position alone. The term ``area'' is not suitable to describe a group
of events that consist of these parameters.

A more generic term for a component that groups similar events is the
\emph{event filter}. The concept of an event filter is based on the same
principle as areas, which is the assumption that gestures are formed from a
subset of all events. However, an event filter takes all parameters of an event
into account. An application on a camera-based multi-touch table could be to
group all triangular objects into one filter, and all rectangular objects into
another; or to separate small finger tips from large ones, to be able to
recognize whether a child or an adult touches the table.
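
A minimal sketch of such a filter, assuming events expose arbitrary parameters
as attributes, could look as follows (the names and the \texttt{color}
attribute are hypothetical):

\begin{verbatim}
# A sketch of the event filter concept: an arbitrary predicate over all
# event parameters decides which events are grouped together.

class EventFilter(object):
    def __init__(self, predicate):
        self.predicate = predicate   # function(event) -> bool
        self.trackers = []

    def delegate_event(self, event):
        if self.predicate(event):
            for tracker in self.trackers:
                tracker.handle_event(event)

# Example on a camera-based table: blue objects are used for rotation,
# green objects for dragging (the 'color' attribute is hypothetical).
rotation_filter = EventFilter(lambda e: getattr(e, 'color', None) == 'blue')
drag_filter = EventFilter(lambda e: getattr(e, 'color', None) == 'green')
\end{verbatim}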
\section{Using a state machine for gesture detection}

All gesture trackers in the reference implementation are based on the explicit
analysis of events. Gesture detection is a widely researched subject, and the
separation of detection logic into different trackers allows for multiple types
of gesture detection in the same architecture. An interesting question is
whether multi-touch gestures can be described in a formal way, so that explicit
detection code can be avoided.

\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning
to recognize gestures. To use machine learning, a set of input events forming
a particular gesture must be represented as a feature vector. A learning set
containing feature vectors that represent some gesture ``teaches'' the machine
what the features of that gesture look like.

An advantage of using explicit gesture detection code is that it provides a
flexible way to specify the characteristics of a gesture, whereas the
performance of feature vector-based machine learning is dependent on the
quality of the learning set.
A better method to describe a gesture might be to specify its features as a
``signature''. The parameters of such a signature must be based on input
events. When a set of input events matches the signature of some gesture, the
gesture is triggered. A gesture signature should be a complete description
of all requirements the set of events must meet to form the gesture.

A way to describe signatures on a multi-touch surface is by means of a state
machine of its touch objects. The states of a simple touch point could be
$\{down, move, hold, up\}$, to indicate respectively that a point is put down,
is being moved, is held at a position for some time, and is released. In this
case, a ``drag'' gesture can be described by the sequence $down - move - up$
and a ``select'' gesture by the sequence $down - hold$. If the set of states is
not sufficient to describe a desired gesture, a developer can add additional
states. For example, to be able to make a distinction between an element being
``dragged'' or ``thrown'' in some direction on the screen, two additional
states can be added: $\{start, stop\}$, indicating that a point starts and
stops moving. The resulting state transitions are the sequences $down - start -
move - stop - up$ and $down - start - move - up$ (the latter does not include a
$stop$, to indicate that the element must keep moving after the gesture has
been performed).

An additional way to describe even more complex gestures is to use other
gestures in a signature. An example is to combine $select - drag$ to specify
that an element must be selected before it can be dragged.

The application of a state machine to describe multi-touch gestures is a
subject well worth exploring in the future.
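
As a rough illustration of the idea, a gesture signature could be represented
as a sequence of states and matched against the observed state sequence of a
touch point. The sketch below uses illustrative names and a deliberately naive
matching rule; it is not part of the reference implementation.

\begin{verbatim}
# A sketch of gesture signatures as state sequences. The matching rule is
# deliberately naive: it compares the full observed history of one point.

DRAG = ('down', 'move', 'up')
SELECT = ('down', 'hold')

class SignatureMatcher(object):
    def __init__(self, signatures):
        self.signatures = signatures   # gesture name -> state sequence
        self.history = []

    def feed(self, state):
        self.history.append(state)
        for name, sequence in self.signatures.items():
            if tuple(self.history) == sequence:
                self.history = []
                return name            # the signature that matched
        return None

matcher = SignatureMatcher({'drag': DRAG, 'select': SELECT})
for state in ('down', 'move', 'up'):
    gesture = matcher.feed(state)
print(gesture)                         # prints 'drag'
\end{verbatim}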
\section{Daemon implementation}

Section \ref{sec:daemon} proposes the usage of a network protocol to
communicate between an architecture implementation and (multiple) gesture-based
applications, as illustrated in figure \ref{fig:daemon}. The reference
implementation does not support network communication. If the architecture
design is to become successful in the future, the implementation of network
communication is a must. ZeroMQ (or $\emptyset$MQ) \cite{ZeroMQ} is a
high-performance messaging library with support for a wide range of programming
languages. A future implementation could use this library as the basis for its
communication layer.
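
A minimal sketch of such a communication layer with pyzmq is shown below. The
message format and port number are made up for illustration; a real protocol
would also need messages for registering areas and gesture handlers.

\begin{verbatim}
# A sketch of gesture delivery over ZeroMQ (pyzmq). Port and message
# format are illustrative.

import zmq

def publish_gestures(gestures, port=5555):
    """Daemon side: publish detected gestures as JSON messages."""
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:%d" % port)
    for gesture in gestures:          # any iterable of dictionaries
        socket.send_json(gesture)     # e.g. {'type': 'tap', 'x': 0.4, 'y': 0.2}

def receive_gestures(port=5555):
    """Application side: yield gestures received from the daemon."""
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:%d" % port)
    socket.setsockopt(zmq.SUBSCRIBE, b'')   # subscribe to all messages
    while True:
        yield socket.recv_json()
\end{verbatim}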
If an implementation of the architecture is released, a good idea would be
to do so within a community of application developers. A community can
contribute to a central database of gesture trackers, making the interaction
from their applications available for use in other applications.

Ideally, a user can install a daemon process containing the architecture so
that it is usable by any gesture-based application on the device. Applications
that use the architecture can specify it as a software dependency, or
include it in a software distribution.
\bibliographystyle{plain}
\bibliography{report}{}

\appendix

\chapter{The TUIO protocol}
\label{app:tuio}
The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a session
id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'', an $(x, y)$
position on the screen.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was in the previous message but not in the
current one, the object it represents has been lifted from the screen. SET
messages provide information about movement. In the case of simple $(x, y)$
positions, only the movement vector of the position itself can be calculated.
For more complex objects such as fiducials, parameters like rotational position
and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the Windows 7 implementation
\cite{win7touch}).
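
The sketch below shows one way to derive these events, by comparing the set of
alive session ids between consecutive messages. The names are illustrative;
the reference implementation may do this differently.

\begin{verbatim}
# A sketch of translating TUIO ALIVE/SET messages into point events.

class TuioTranslator(object):
    def __init__(self, delegate):
        self.delegate = delegate   # function(event_type, session_id, x, y)
        self.alive = set()         # session ids currently on the surface
        self.positions = {}        # last known position per session id

    def on_set(self, session_id, x, y):
        # The first SET for a new session id becomes a "point down" event,
        # subsequent SET messages become "point move" events.
        event = 'point_move' if session_id in self.positions else 'point_down'
        self.positions[session_id] = (x, y)
        self.delegate(event, session_id, x, y)

    def on_alive(self, session_ids):
        session_ids = set(session_ids)
        for sid in self.alive - session_ids:   # lifted from the screen
            x, y = self.positions.pop(sid, (0.0, 0.0))
            self.delegate('point_up', sid, x, y)
        self.alive = session_ids
\end{verbatim}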
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:

\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
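
In the client, this translation amounts to scaling the normalized coordinates
to screen coordinates and subtracting the window origin, for example (a
sketch):

\begin{verbatim}
# A sketch of mapping normalized TUIO coordinates to window coordinates.

def tuio_to_window(x, y, win_x, win_y, screen_width, screen_height):
    """x and y are in [0.0, 1.0]; the result is relative to the window."""
    return x * screen_width - win_x, y * screen_height - win_y
\end{verbatim}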
\chapter{Gesture detection in the reference implementation}
\label{app:implementation-details}

Both rotation and pinch detection use the centroid of all touch points. A
\emph{rotation} gesture uses the difference in angle relative to the centroid
of all touch points, and \emph{pinch} uses the difference in distance. Both
values are normalized by dividing by the number of touch points. A pinch event
contains a scale factor, and therefore uses the division of the current by the
previous average distance to the centroid.
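
A sketch of this computation for two consecutive sets of touch point positions
is given below (the naive averaging of angles ignores wrap-around at $\pm\pi$
for brevity):

\begin{verbatim}
# A sketch of centroid-based rotation and pinch detection for two
# consecutive sets of (x, y) touch point positions.

from math import atan2, hypot

def centroid(points):
    n = float(len(points))
    return (sum(x for x, y in points) / n, sum(y for x, y in points) / n)

def average_angle_and_distance(points):
    cx, cy = centroid(points)
    n = float(len(points))
    angle = sum(atan2(y - cy, x - cx) for x, y in points) / n
    distance = sum(hypot(x - cx, y - cy) for x, y in points) / n
    return angle, distance

def rotation_and_scale(previous, current):
    """Return (angle difference, scale factor) between two point sets."""
    prev_angle, prev_dist = average_angle_and_distance(previous)
    cur_angle, cur_dist = average_angle_and_distance(current)
    return cur_angle - prev_angle, cur_dist / prev_dist
\end{verbatim}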
% TODO
\emph{TODO: rotation and pinch will be described somewhat differently and in
more detail.}

\end{document}