\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle
\begin{abstract}
Applications that use complex gesture-based interaction need to translate
primitive messages from low-level device drivers into complex, high-level
gestures, and map these gestures to elements in an application. This report
presents a generic architecture for the detection of complex gestures in an
application. The architecture translates device driver messages to a common
set of ``events''. The events are then delegated to a tree of ``event
areas'', which are used to separate groups of events and assign these
groups to an element in the application. Gesture detection is performed on
a group of events assigned to an event area, using detection units called
``gesture trackers''. An implementation of the architecture as a daemon
process would be capable of serving gestures to multiple applications at
the same time. A reference implementation and two test case applications
have been created to test the effectiveness of the architecture design.
\end{abstract}
% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on a separate page
\tableofcontents

\chapter{Introduction}
\label{chapter:introduction}
Surface-touch devices have evolved from pen-based tablets via single-touch
trackpads to multi-touch devices such as smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making
the interaction more expressive and intuitive. These gestures are more complex
than the primitive ``click'' or ``tap'' events used by single-touch devices.
Some examples of more complex gestures are ``pinch''\footnote{A ``pinch''
gesture is formed by performing a pinching movement with multiple fingers on a
multi-touch surface. Pinch gestures are often used to zoom in or out on an
object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an
object and throwing it in a direction on a touch surface, giving it momentum
to move for some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. As gestures become more complex, the complexity
of the logic required to detect them increases as well. This challenge limits,
or even deters, application developers from using complex gestures in an
application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. With the existence of
such an implementation, application developers do not need to reinvent gesture
detection for every new gesture-based application.
\section{Contents of this document}

The scope of this thesis is limited to the detection of gestures on
multi-touch surface devices. It presents a design for a generic gesture
detection architecture for use in multi-touch based applications. A
reference implementation of this design is used in some test case
applications, whose purpose is to test the effectiveness of the design and
detect its shortcomings.

Chapter \ref{chapter:related} describes related work that inspired a design
for the architecture. The design is presented in chapter
\ref{chapter:design}. Chapter \ref{chapter:testapps} presents a reference
implementation of the architecture, and two test case applications that
show the practical use of its components as presented in chapter
\ref{chapter:design}. Finally, some suggestions for future research on the
subject are given in chapter \ref{chapter:futurework}.
\chapter{Related work}
\label{chapter:related}

Applications that use gesture-based interaction need a graphical user
interface (GUI) on which gestures can be performed. The creation of a GUI
is a platform-specific task. For instance, Windows and Linux support
different window managers. To create a window in a platform-independent
application, the application would need to include separate functionality
for each supported platform. For this reason, GUI-based applications are
often built on top of an application framework that abstracts
platform-specific tasks. Frameworks often include a set of tools and events
that help the developer to easily build advanced GUI widgets.

% Existing frameworks (and why they're not good enough)
Some frameworks, such as Nokia's Qt \cite{qt}, provide support for basic
multi-touch gestures like tapping, rotation or pinching. However, the
detection of gestures is embedded in the framework code in an inseparable
way. Consequently, an application developer who wants to use multi-touch
interaction in an application is forced to use an application framework
that includes support for those multi-touch gestures that are required by
the application. Kivy \cite{kivy} is a GUI framework for Python
applications, with support for multi-touch gestures. It uses a basic
gesture detection algorithm that allows developers to define custom
gestures to some degree \cite{kivygesture}, using a set of touch point
coordinates. However, these frameworks do not provide support for extension
with custom complex gestures.

Many frameworks are also device-specific, meaning that they are developed
for use on either a tablet, smartphone, PC or other device. OpenNI
\cite{OpenNI2010}, for example, only provides APIs for natural interaction
(NI) devices such as webcams and microphones. The concept of complex
gesture-based interaction, however, is applicable to a much wider set of
devices. VRPN \cite{VRPN} provides a software library that abstracts the
output of devices, which enables it to support a wide set of devices used
in Virtual Reality (VR) interaction. The framework makes the low-level
events of these devices accessible in a client application using network
communication. Gesture detection is not included in VRPN.
% Methods of gesture detection
The detection of high-level gestures from low-level events can be
approached in several ways. GART \cite{GART} is a toolkit for the
development of gesture-based applications, which states that the best way
to classify gestures is to use machine learning. The programmer trains an
application to recognize gestures using a machine learning library from the
toolkit. Though multi-touch input is not directly supported by the toolkit,
the level of abstraction does allow for it to be implemented in the form of
a ``touch'' sensor. The reason to use machine learning is that gesture
detection ``is likely to become increasingly complex and unmanageable''
when using a predefined set of rules to detect whether some sensor input
can be classified as a specific gesture.

The alternative to machine learning is to define a predefined set of rules
for each gesture. Manoj Kumar \cite{win7touch} presents a Windows 7
application, written in Microsoft's .NET, which detects a set of basic
directional gestures based on the movement of a stylus. The complexity of
the code is managed by separating different gesture types into different
detection units called ``gesture trackers''. The application shows that
predefined gesture detection rules do not necessarily produce unmanageable
code.
\section{Analysis of related work}

Implementations for the support of complex gesture-based interaction
already exist. However, gesture detection in these implementations is
device-specific (Nokia Qt and OpenNI) or limited to use within an
application framework (Kivy).

An abstraction of device output allows VRPN and GART to support multiple
devices. However, VRPN does not incorporate gesture detection. GART does,
but only in the form of machine learning algorithms. Many applications for
mobile phones and tablets only use simple gestures such as taps. For this
category of applications, machine learning is an excessively complex method
of gesture detection. Manoj Kumar shows that, when managed well, a
predefined set of gesture detection rules is sufficient to detect simple
gestures.

This thesis explores the possibility of creating an architecture that
combines support for multiple input devices with different methods of
gesture detection.
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

Application frameworks are a necessity when it comes to fast,
cross-platform development. A generic architecture design should aim to be
compatible with existing frameworks, and provide a way to detect and extend
gestures independently of the framework. Since an application framework is
written in a specific programming language, the architecture should be
accessible to applications using a language-independent method of
communication. This intention leads towards the concept of a dedicated
gesture detection application that serves gestures to multiple applications
at the same time.

This chapter describes a design for such an architecture. The architecture
components are shown in figure \ref{fig:fulldiagram}. Sections
\ref{sec:multipledrivers} to \ref{sec:daemon} explain the use of all
components in detail.

\fulldiagram

\newpage
\section{Supporting multiple drivers}
\label{sec:multipledrivers}

The TUIO protocol \cite{TUIO} is an example of a driver that can be used by
multi-touch devices. TUIO uses ALIVE and SET messages to communicate
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
may use different message types. To support more than one driver in the
architecture, there must be some translation from device-specific messages
to a common format for primitive touch events. After all, the gesture
detection logic in a ``generic'' architecture should not be implemented
based on device-specific messages. The event types in this format should be
chosen so that multiple drivers can trigger the same events. If each
supported driver would add its own set of event types to the common format,
the purpose of it being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

The component that translates device-specific messages to common events is
called the \emph{event driver}. The event driver runs in a loop, receiving
and analyzing driver messages. When a sequence of messages is analyzed as
an event, the event driver delegates the event to other components in the
architecture for translation to gestures.

Support for a touch driver can be added by adding an event driver
implementation. The choice of event driver implementation used in an
application depends on the driver support of the touch device being used.

Because driver implementations have a common output format in the form of
events, multiple event drivers can be used at the same time (see figure
\ref{fig:multipledrivers}). This design feature allows low-level events
from multiple devices to be aggregated into high-level gestures.

\multipledriversdiagram
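To illustrate the role of an event driver, the sketch below shows a minimal
Python event driver that translates hypothetical device messages into the
common $point\_down$, $point\_move$ and $point\_up$ events. The class and
method names are assumptions chosen for readability; they do not correspond
to the actual API of the reference implementation.

\begin{verbatim}
# Minimal sketch of an event driver (illustrative names only).
class Event(object):
    def __init__(self, event_type, x, y):
        self.type = event_type
        self.x = x
        self.y = y

class EventDriver(object):
    def __init__(self, delegate):
        # 'delegate' is a callable that passes events on to the event
        # area tree for gesture detection.
        self.delegate = delegate

class ExampleTouchDriver(EventDriver):
    def handle_message(self, message):
        # Translate a device-specific message to a common event.
        if message.kind == 'touch_begin':
            self.delegate(Event('point_down', message.x, message.y))
        elif message.kind == 'touch_update':
            self.delegate(Event('point_move', message.x, message.y))
        elif message.kind == 'touch_end':
            self.delegate(Event('point_up', message.x, message.y))
\end{verbatim}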
\section{Event areas: connecting gesture events to widgets}
\label{sec:areas}

Touch input devices are unaware of the graphical input
widgets\footnote{``Widget'' is a name commonly used to identify an element
of a graphical user interface (GUI).} rendered by an application, and
therefore generate events that simply identify the screen location at which
an event takes place. User interfaces of applications that do not run in
full-screen mode are contained in a window. In most cases, events which
occur outside the application window should not be handled by the
application. What's more, different widgets within the application window
itself should be able to respond to different gestures. For example, a
button widget may respond to a ``tap'' gesture to be activated, whereas the
application window responds to a ``pinch'' gesture to be resized. In order
to be able to direct a gesture to a particular widget in an application, a
gesture must be restricted to the area of the screen covered by that
widget. An important question is whether the architecture should offer a
solution to this problem, or leave the task of assigning gestures to
application widgets to the application developer.

If the architecture does not provide a solution, the ``gesture detection''
component in figure \ref{fig:fulldiagram} receives all events that occur on
the screen surface. The gesture detection logic thus uses all events as
input to detect a gesture. This leaves no possibility for a gesture to
occur at multiple screen positions at the same time. The problem is
illustrated in figure \ref{fig:ex1}, where two widgets on the screen can be
rotated independently. The component that detects rotation gestures
receives all four fingers as input. If the two groups of finger events are
not separated by cluster detection, only one rotation event will occur.

\examplefigureone

A gesture detection component could perform heuristic cluster detection
based on the distance between events. However, this method cannot
guarantee that a cluster of events corresponds to a particular application
widget. In short, a gesture detection component is difficult to implement
without awareness of the location of application widgets. Secondly, the
application developer still needs to direct gestures to a particular widget
manually. This requires geometric calculations in the application logic,
which is a tedious and error-prone task for the developer.

The architecture described here groups events that occur inside the area
covered by a widget, before passing them on to a gesture detection
component. Different gesture detection components can then detect gestures
simultaneously, based on different sets of input events. An area of the
screen surface is represented by an \emph{event area}. An event area
filters input events based on their location, and then delegates events to
gesture detection components that are assigned to the event area. Events
which are located outside the event area are not delegated to its gesture
detection components.

In the example of figure \ref{fig:ex1}, the two rotatable widgets can be
represented by two event areas, each having its own rotation detection
component. Each event area can consist of the four corner locations of the
square it represents. To detect whether an event is located inside a
square, the event areas use a point-in-polygon (PIP) test \cite{PIP}. It is
the task of the client application to update the corner locations of the
event area with those of the widget.
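As an illustration of this idea, the sketch below implements a polygon event
area with a standard ray-casting point-in-polygon test. The class name and
structure are assumptions made for this example; the reference implementation
may organize this differently.

\begin{verbatim}
# Sketch of an event area with a ray-casting point-in-polygon test.
# Names and structure are illustrative, not the actual implementation.
class PolygonArea(object):
    def __init__(self, corners):
        # 'corners' is a list of (x, y) tuples describing the polygon.
        self.corners = corners

    def contains(self, x, y):
        inside = False
        n = len(self.corners)
        for i in range(n):
            x1, y1 = self.corners[i]
            x2, y2 = self.corners[(i + 1) % n]
            # Count crossings of a horizontal ray starting at (x, y).
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / float(y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside
\end{verbatim}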
\subsection{Callback mechanism}

When a gesture is detected by a gesture detection component, it must be
handled by the client application. A common way to handle events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, and this function is called when the event occurs.
Because developers are familiar with this concept, the architecture uses a
callback mechanism to handle gestures in an application. Callback handlers
are bound to event areas, since event areas control the grouping of events
and thus the occurrence of gestures in an area of the screen.
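A hypothetical usage example of this mechanism is shown below; the class and
method names (\texttt{RectangularArea}, \texttt{add\_tracker}, \texttt{bind})
are assumptions chosen for readability rather than the exact API of the
reference implementation.

\begin{verbatim}
# Hypothetical usage of the callback mechanism.
def on_tap(gesture):
    print('Button tapped at (%.0f, %.0f)' % (gesture.x, gesture.y))

button_area = RectangularArea(0, 0, 100, 40)   # area covering the button
button_area.add_tracker(TapTracker(button_area))
button_area.bind('tap', on_tap)  # call on_tap when a tap is detected
\end{verbatim}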
\subsection{Area tree}
\label{sec:tree}

A basic way to use event areas in the architecture would be to keep them in
a flat list. When the event driver delegates an event, it is accepted by
each event area that contains the event coordinates.

If the architecture were to be used in combination with an application
framework, each widget that responds to gestures should have a mirroring
event area that synchronizes its location with that of the widget. Consider
a panel with five buttons that all listen to a ``tap'' event. If the
location of the panel changes as a result of movement of the application
window, the positions of all buttons have to be updated too.

This process is simplified by arranging event areas in a tree structure. A
root event area represents the panel, containing five other event areas
which are positioned relative to the root area. The relative positions do
not need to be updated when the panel area changes its position. GUI
frameworks use this kind of tree structure to manage graphical widgets.

If the GUI toolkit provides an API for requesting the position and size of
a widget, a recommended first step when developing an application is to
create a subclass of the area that automatically synchronizes with the
position of a widget from the GUI framework. For example, the test
application described in section \ref{sec:testapp} extends the GTK
\cite{GTK} application window widget with the functionality of a
rectangular event area, to direct touch events to an application window.
\subsection{Event propagation}
\label{sec:eventpropagation}

Another problem occurs when event areas overlap, as shown by figure
\ref{fig:eventpropagation}. When the white square is dragged, the gray
square should stay at its current position. This means that events that are
used for dragging of the white square should not be used for dragging of
the gray square. The use of event areas alone does not provide a solution
here, since both the gray and the white event area accept an event that
occurs within the white square.

The problem described above is a common problem in GUI applications, and
there is a common solution (used by GTK \cite{gtkeventpropagation}, among
others). An event is passed to an ``event handler''. If the handler returns
\texttt{true}, the event is considered ``handled'' and is not
``propagated'' to other widgets. Applied to the example of the draggable
squares, the drag detection component of the white square should stop the
propagation of events to the event area of the gray square.

In the example, dragging of the white square has priority over dragging of
the gray square because the white area is the widget actually being touched
at the screen surface. In general, events should be delegated to event
areas according to the order in which the event areas are positioned over
each other. The tree structure in which event areas are arranged is an
ideal tool to determine the order in which an event is delegated. An
object touching the screen is essentially touching the deepest event area
in the tree that contains the triggered event, which must be the first to
receive the event. When the gesture trackers of the event area are
finished with the event, it is propagated to the siblings and parent in the
event area tree. Optionally, a gesture tracker can stop the propagation of
the event by its corresponding event area. Figure
\ref{fig:eventpropagation} demonstrates event propagation in the example of
the draggable squares.

\eventpropagationfigure
An additional type of event propagation is ``immediate propagation'', which
indicates propagation of an event from one gesture detection component to
another. This is applicable when an event area uses more than one gesture
detection component. When regular propagation is stopped, the event is
first propagated to the other gesture detection components of the area
before actually being stopped. One of these components can also stop the
immediate propagation of an event, so that the event is not passed to the
next gesture detection component, nor to the ancestors of the event area.
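A minimal sketch of how this delegation and propagation could be organized is
given below. The method names and the convention that a return value of
\texttt{True} stops propagation are assumptions for illustration; the
\texttt{contains} method is assumed to be provided by a subclass such as the
polygon area sketched earlier.

\begin{verbatim}
# Sketch of event delegation with propagation control (illustrative).
class EventArea(object):
    def __init__(self):
        self.children = []
        self.trackers = []

    def delegate_event(self, event):
        # The deepest, most recently added areas receive the event first.
        for child in reversed(self.children):
            if child.contains(event.x, event.y) \
                    and child.delegate_event(event):
                return True  # a descendant stopped propagation

        for tracker in self.trackers:
            if tracker.handle_event(event):
                return True  # a tracker of this area stopped propagation

        return False  # propagate further to siblings and the parent
\end{verbatim}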
The concept of an event area is based on the assumption that the set of
originating events that form a particular gesture can be determined based
exclusively on the location of the events. This is a reasonable assumption
for simple touch objects whose only parameter is a position, such as a pen
or a human finger. However, more complex touch objects can have additional
parameters, such as rotational orientation or color. An even more generic
concept is the \emph{event filter}, which detects whether an event should
be assigned to a particular gesture detection component based on all
available parameters. This level of abstraction provides additional methods
of interaction. For example, a camera-based multi-touch surface could make
a distinction between gestures performed with a blue gloved hand, and
gestures performed with a green gloved hand.

As mentioned in the introduction (chapter \ref{chapter:introduction}), the
scope of this thesis is limited to multi-touch surface based devices, for
which the \emph{event area} concept suffices. Section \ref{sec:eventfilter}
explores the possibility of replacing event areas with event filters.
\section{Detecting gestures from low-level events}
\label{sec:gesture-detection}

The low-level events that are grouped by an event area must be translated
to high-level gestures in some way. Simple gestures, such as a tap or the
dragging of an element using one finger, are easy to detect by comparing
the positions of sequential $point\_down$ and $point\_move$ events. More
complex gestures, like the writing of a character from the alphabet,
require more advanced detection algorithms.

Sequences of events that are triggered by a multi-touch surface are often
of a manageable complexity. An imperative programming style is sufficient
to detect many common gestures, like rotation and dragging. The imperative
programming style is also familiar and understandable for a wide range of
application developers. Therefore, the architecture should support this
style of gesture detection. A problem with an imperative programming style
is that the explicit detection of different gestures requires different
gesture detection components. If these components are not managed well, the
detection logic is prone to become chaotic and over-complex.

A way to detect more complex gestures based on a sequence of input events
is with the use of machine learning methods, such as the Hidden Markov
Models (HMM)\footnote{A Hidden Markov Model (HMM) is a statistical model
without a memory; it can be used to detect gestures based on the current
input state alone.} used for sign language detection by Gerhard Rigoll et
al. \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to
a feature vector that is recognized as a particular gesture with a certain
probability. An advantage of using machine learning with respect to an
imperative programming style is that complex gestures are described
without the use of explicit detection logic, thus reducing code complexity.
For example, the detection of the character `A' being written on the screen
is difficult to implement using explicit detection code, whereas a trained
machine learning system can produce a match with relative ease.

To manage complexity and support multiple styles of gesture detection
logic, the architecture has adopted the tracker-based design described by
Manoj Kumar \cite{win7touch}. Different detection components are wrapped
in separate gesture tracking units called \emph{gesture trackers}. The
input of a gesture tracker is provided by an event area in the form of
events. Each gesture detection component is wrapped in a gesture tracker
with a fixed type of input and output. Internally, the gesture tracker can
adopt any programming style. A character recognition component can use an
HMM, whereas a tap detection component defines a simple function that
compares event coordinates.

When a gesture tracker detects a gesture, this gesture is triggered in the
corresponding event area. The event area then calls the callbacks which are
bound to the gesture type by the application.

The use of gesture trackers as small detection units makes the architecture
extensible. A developer can write a custom gesture tracker and register it
in the architecture. The tracker can use any type of detection logic
internally, as long as it translates low-level events to high-level
gestures. An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
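As an illustration of how a detection component can be wrapped in such a
unit, the sketch below shows a simplified tap tracker. The thresholds,
attribute names and the \texttt{trigger} call are assumptions; the tap
tracker of the reference implementation is more elaborate.

\begin{verbatim}
# Simplified sketch of a gesture tracker detecting a 'tap' gesture.
import time

class TapTracker(object):
    MAX_DURATION = 0.3   # seconds between point_down and point_up
    MAX_DISTANCE = 10.0  # maximum movement in pixels

    def __init__(self, area):
        self.area = area  # event area in which detected gestures trigger
        self.downs = {}   # point id -> (x, y, timestamp)

    def handle_event(self, event):
        if event.type == 'point_down':
            self.downs[event.point_id] = (event.x, event.y, time.time())
        elif event.type == 'point_up' and event.point_id in self.downs:
            x, y, t = self.downs.pop(event.point_id)
            moved = ((event.x - x) ** 2 + (event.y - y) ** 2) ** 0.5
            if time.time() - t < self.MAX_DURATION \
                    and moved < self.MAX_DISTANCE:
                self.area.trigger('tap', event)
        return False  # do not stop event propagation
\end{verbatim}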
\section{Serving multiple applications}
\label{sec:daemon}

The design of the architecture is essentially complete with the components
specified in this chapter. However, one specification has not yet been
discussed: the ability to address the architecture using a method of
communication that is independent of the application's programming
language.

If the architecture and a gesture-based application are written in the same
language, the main loop of the architecture can run in a separate thread of
the application. If the application is written in a different language, the
architecture has to run in a separate process. Since the application needs
to respond to gestures that are triggered by the architecture, there must
be a communication layer between the separate processes.

A common and efficient way of communication between two separate processes
is through the use of a network protocol. In this particular case, the
architecture can run as a daemon\footnote{A ``daemon'' is the Unix term for
a process that runs in the background.} process, listening to driver
messages and triggering gestures in registered applications.

\vspace{-0.3em}
\daemondiagram

An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would start its own instance of the architecture in a separate
process, which would be less efficient. The network communication layer
also allows the architecture and a client application to run on separate
machines, thus distributing computational load. The other machine may even
use a different operating system.
%\section{Example usage}
%\label{sec:example}
%
%This section describes an extended example to illustrate the data flow of
%the architecture. The example application listens to tap events on a button
%within an application window. The window also contains a draggable circle.
%The application window can be resized using \emph{pinch} gestures. Figure
%\ref{fig:examplediagram} shows the architecture created by the pseudo code
%below.
%
%\begin{verbatim}
%initialize GUI framework, creating a window and necessary GUI widgets
%
%create a root event area that synchronizes position and size with the application window
%define 'rotation' gesture handler and bind it to the root event area
%
%create an event area with the position and radius of the circle
%define 'drag' gesture handler and bind it to the circle event area
%
%create an event area with the position and size of the button
%define 'tap' gesture handler and bind it to the button event area
%
%create a new event server and assign the created root event area to it
%
%start the event server in a new thread
%start the GUI main loop in the current thread
%\end{verbatim}
%
%\examplediagram
\chapter{Implementation and test applications}
\label{chapter:testapps}

A reference implementation of the design has been written in Python. Two test
applications have been created to test if the design ``works'' in a practical
application, and to detect its flaws. One application is mainly used to test
the gesture tracker implementations. The other application uses multiple event
areas in a tree structure, demonstrating event delegation and propagation. The
second application also defines a custom gesture tracker.

To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.

%The reference implementation and its test applications are a Proof of Concept,
%meant to show that the architecture design is effective.
%that translates TUIO messages to some common multi-touch gestures.

\section{Reference implementation}
\label{sec:implementation}

The reference implementation is written in Python and available at
\cite{gitrepos}. The following component implementations are included:

\textbf{Event drivers}
\begin{itemize}
\item TUIO driver, using only the support for simple touch points with an
      $(x, y)$ position.
\end{itemize}

\textbf{Event areas}
\begin{itemize}
\item Circular area
\item Rectangular area
\item Polygon area
\item Full screen area
\end{itemize}

\textbf{Gesture trackers}
\begin{itemize}
\item Basic tracker, supports $point\_down,~point\_move,~point\_up$ gestures.
\item Tap tracker, supports $tap,~single\_tap,~double\_tap$ gestures.
\item Transformation tracker, supports $rotate,~pinch,~drag,~flick$ gestures.
\end{itemize}

The implementation does not include a network protocol to support the daemon
setup as described in section \ref{sec:daemon}. Therefore, it is only usable in
Python programs. The two test programs are also written in Python.

The event area implementations contain some geometric functions to determine
whether an event should be delegated to an event area. All gesture trackers
have been implemented using an imperative programming style. Technical details
about the implementation of gesture detection are described in appendix
\ref{app:implementation-details}.
%\section{Basic usage}
% TODO
% move the example usage from chapter 3 to here
\section{Full screen Pygame application}

%The goal of this application was to experiment with the TUIO
%protocol, and to discover requirements for the architecture that was to be
%designed. When the architecture design was completed, the application was rewritten
%using the new architecture components. The original variant is still available
%in the ``experimental'' folder of the Git repository \cite{gitrepos}.

An implementation of the detection of some simple multi-touch gestures (single
tap, double tap, rotation, pinch and drag) using Processing\footnote{Processing
is a Java-based programming environment with an export possibility for Android.
See also \cite{processing}.} can be found in a forum on the Processing website
\cite{processingMT}. The application has been ported to Python and adapted to
receive input from the TUIO protocol. The implementation is fairly simple, but
it yields some appealing results (see figure \ref{fig:draw}). In the original
application, the detection logic of all gestures is combined in a single class
file. As predicted by the GART article \cite{GART}, this leads to over-complex
code that is difficult to read and debug.

The original application code consists of two main classes. The ``multi-touch
server'' starts a ``TUIO server'' that translates TUIO events to ``point
\{down,move,up\}'' events. Detection of ``tap'' and ``double tap'' gestures is
performed immediately after an event is received. Other gesture detection runs
in a separate thread, using the following loop:
\begin{verbatim}
60 times per second do:
    detect `single tap' based on the time since the latest `tap' gesture
    if points have been moved, added or removed since last iteration do:
        calculate the centroid of all points
        detect `drag' using centroid movement
        detect `rotation' using average orientation of all points to centroid
        detect `pinch' using average distance of all points to centroid
\end{verbatim}
There are two problems with the implementation described above. In the first
place, low-level events are not grouped before gesture detection. The gesture
detection uses all events for a single gesture. Therefore, only one element at
a time can be rotated, resized, etc. (see also section \ref{sec:areas}).
Secondly, all detection code is located in the same class file. To extend the
application with new gestures, a programmer must extend the code in this class
file and therefore understand its structure. Since the main loop calls specific
gesture detection components explicitly in a certain order, the programmer must
alter the main loop to call custom gesture detection code. This is a problem
because this way of extending code is not scalable over time. The class file
would become more and more complex when extended with new gestures.

The two problems have been solved using event areas and gesture trackers from
the reference implementation. The gesture detection code has been separated
into two different gesture trackers, which are the ``tap'' and
``transformation'' trackers mentioned in section \ref{sec:implementation}.

The positions of all touch objects and their centroid are drawn using the
Pygame library. Since the Pygame library does not provide support to find the
location of the display window, the root event area captures events in the
entire screen surface. The application can be run either full screen or in
windowed mode. If windowed, screen-wide gesture coordinates are mapped to the
size of the Pygame window. In other words, the Pygame window always represents
the entire touch surface. The output of the application can be seen in figure
\ref{fig:draw}.
\begin{figure}[h!]
    \center
    \includegraphics[scale=0.4]{data/pygame_draw.png}
    \caption{Output of the experimental drawing program. It draws all touch
    points and their centroid on the screen (the centroid is used for rotation
    and pinch detection). It also draws a green rectangle which responds to
    rotation and pinch events.}
    \label{fig:draw}
\end{figure}
\section{GTK+/Cairo application}
\label{sec:testapp}

The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create
its user interface. Since GTK+ defines a main event loop that is started in
order to use the interface, the architecture implementation runs in a separate
thread.

The application creates a main window, whose size and position are synchronized
with the root event area of the architecture. The synchronization is handled
automatically by a \texttt{GtkEventWindow} object, which is a subclass of
\texttt{gtk.Window}. This object serves as a layer that connects the event area
functionality of the architecture to GTK+ windows. The following Python code
captures the essence of the synchronization layer:
\begin{verbatim}
class GtkEventWindow(Window):
    def __init__(self, width, height):
        Window.__init__(self)

        # Create an event area to represent the GTK window in the gesture
        # detection architecture
        self.area = RectangularArea(0, 0, width, height)

        # The "configure-event" signal is triggered by GTK when the position
        # or size of the window are updated
        self.connect("configure-event", self.sync_area)

    def sync_area(self, win, event):
        # Synchronize the position and size of the event area with that of
        # the GTK window
        self.area.width = event.width
        self.area.height = event.height
        self.area.set_position(*event.get_coords())
\end{verbatim}
The application window contains a number of polygons which can be dragged,
resized and rotated. Each polygon is represented by a separate event area to
allow simultaneous interaction with different polygons. The main window also
responds to transformation, by transforming all polygons. Additionally, double
tapping on a polygon changes its color.

An ``overlay'' event area is used to detect all fingers currently touching the
screen. The application defines a custom gesture tracker, called the ``hand
tracker'', which is used by the overlay. The hand tracker uses distances
between detected fingers to detect which fingers belong to the same hand. The
application draws a line from each finger to the hand it belongs to, as visible
in figure \ref{fig:testapp}.
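The clustering idea behind the hand tracker can be sketched as follows; the
distance threshold and function names are assumptions, not the actual
implementation used by the test application.

\begin{verbatim}
# Sketch of grouping fingers into hands by distance (illustrative).
HAND_RADIUS = 150.0  # maximum finger-to-hand distance in pixels

def assign_to_hands(fingers, hands):
    # 'fingers' and 'hands' are lists of (x, y) positions. Each finger
    # is assigned to the nearest hand within HAND_RADIUS; otherwise it
    # starts a new hand. Returns the index of the hand for each finger.
    assignment = []
    for fx, fy in fingers:
        nearest, best = None, HAND_RADIUS
        for i, (hx, hy) in enumerate(hands):
            distance = ((fx - hx) ** 2 + (fy - hy) ** 2) ** 0.5
            if distance < best:
                nearest, best = i, distance
        if nearest is None:
            hands.append((fx, fy))
            nearest = len(hands) - 1
        assignment.append(nearest)
    return assignment
\end{verbatim}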
Note that although it is a full-screen event area, the overlay is not used as
the root of the event area tree. Instead, the overlay is the right sibling of
the application window area in the tree. This is needed because the
application window and its children stop the propagation of events to the root
event area. The overlay area needs all events to keep its hand tracker
up-to-date. Therefore, the tree is arranged in such a way that the overlay
event area handles an event first, before the application window can stop its
propagation. The event area implementation delegates events to its children in
right-to-left order, because areas that are added to the tree later are
assumed to be positioned over their previously added siblings.
\begin{figure}[h!]
    \center
    \includegraphics[scale=0.35]{data/testapp.png}
    \caption{
        Screenshot of the second test application. Two polygons can be dragged,
        rotated and scaled. Separate groups of fingers are recognized as hands;
        each hand is drawn as a centroid with a line to each finger.
    }
    \label{fig:testapp}
\end{figure}
To manage the propagation of events used for transformations and tapping, the
application arranges its event areas in a tree structure as described in
section \ref{sec:tree}. Each transformable event area has its own
``transformation tracker'', which stops the propagation of events used for
transformation gestures. Because the propagation of these events is stopped,
overlapping polygons do not cause a problem. Figure \ref{fig:testappdiagram}
shows the tree structure used by the application.

Note that the overlay event area, though covering the whole screen surface, is
not the root event area. The overlay event area is placed on top of the
application window (being a rightmost sibling of the application window event
area in the tree). This is necessary because the transformation trackers stop
event propagation. The hand tracker needs to capture all events to be able to
give an accurate representation of all fingers touching the screen. Therefore,
the overlay should delegate events to the hand tracker before they are stopped
by a transformation tracker. Placing the overlay over the application window
forces the screen event area to delegate events to the overlay event area
first.

\testappdiagram
\section{Conclusion}

\emph{TODO: point out the shortcomings of the design that emerged from the
tests.}

% Different devices/drivers emit different kinds of primitive events. A
% translation of these device-specific events to a common event format is
% needed to perform gesture detection in a generic way.
% By routing the input of multiple drivers through the same event driver,
% multiple devices are supported at the same time.
% The event driver delivers low-level events. Not every event belongs to every
% gesture, so the events must be filtered to determine which events belong to
% which gesture. Event areas provide this possibility on devices for which the
% filtering is location-bound.
% Splitting gesture detection over separate gesture trackers is a way to be
% flexible in the supported types of detection logic, and to keep complexity
% manageable.
\chapter{Suggestions for future work}
\label{chapter:futurework}

\section{A generic method for grouping events}
\label{sec:eventfilter}

As mentioned in section \ref{sec:areas}, the concept of an event area is based
on the assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
Since this thesis focuses on multi-touch surface based devices, and every
object on a multi-touch surface has a position, this assumption is valid.
However, the design of the architecture is meant to be more generic: to provide
a structured design for managing gesture detection.

An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect},
provides 3D positions. Some multi-touch tables work with a camera that can also
determine the shape and rotational orientation of objects touching the surface.
For these devices, events delegated by the event driver have more parameters
than a 2D position alone. The term ``area'' is not suitable to describe a group
of events that carry these parameters.

A more generic term for a component that groups similar events is the
\emph{event filter}. The concept of an event filter is based on the same
principle as event areas, which is the assumption that gestures are formed from
a subset of all events. However, an event filter takes all parameters of an
event into account. An application on the camera-based multi-touch table could
be to group all objects that are triangular into one filter, and all
rectangular objects into another. Or, to separate small fingertips from large
ones, to be able to recognize whether a child or an adult touches the table.
\section{Using a state machine for gesture detection}

All gesture trackers in the reference implementation are based on the explicit
analysis of events. Gesture detection is a widely researched subject, and the
separation of detection logic into different trackers allows for multiple types
of gesture detection in the same architecture. An interesting question is
whether multi-touch gestures can be described in a formal way so that explicit
detection code can be avoided.

\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning
to recognize gestures. To use machine learning, a set of input events forming a
particular gesture must be represented as a feature vector. A learning set
containing a set of feature vectors that represent some gesture ``teaches'' the
machine what the feature vector of that gesture looks like.

An advantage of using explicit gesture detection code is the fact that it
provides a flexible way to specify the characteristics of a gesture, whereas
the performance of feature vector-based machine learning is dependent on the
quality of the learning set.

A better method to describe a gesture might be to specify its features as a
``signature''. The parameters of such a signature must be based on input
events. When a set of input events matches the signature of some gesture, the
gesture is triggered. A gesture signature should be a complete description
of all requirements the set of events must meet to form the gesture.
One way to describe signatures on a multi-touch surface is to use a state
machine of its touch objects. The states of a simple touch point could be
$\{down, move, hold, up\}$ to indicate respectively that a point is put down,
is being moved, is held at a position for some time, and is released. In this
case, a ``drag'' gesture can be described by the sequence $down - move - up$
and a ``select'' gesture by the sequence $down - hold$. If the set of states is
not sufficient to describe a desired gesture, a developer can add additional
states. For example, to be able to make a distinction between an element being
``dragged'' or ``thrown'' in some direction on the screen, two additional
states can be added: $\{start, stop\}$ to indicate that a point starts and
stops moving. The resulting state transitions are the sequences $down - start -
move - stop - up$ and $down - start - move - up$ (the latter does not include a
$stop$, to indicate that the element must keep moving after the gesture has
been performed).
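As a sketch of what such signatures could look like in code, the fragment
below maps observed state sequences to gesture names; the notation and
matching function are invented for this example.

\begin{verbatim}
# Illustrative sketch of gesture 'signatures' as state sequences.
SIGNATURES = {
    'drag':   ['down', 'start', 'move', 'stop', 'up'],
    'throw':  ['down', 'start', 'move', 'up'],
    'select': ['down', 'hold'],
}

def match_gesture(observed_states):
    # Return the name of the signature that matches the observed
    # sequence of states for a touch point, or None.
    for name, signature in SIGNATURES.items():
        if observed_states == signature:
            return name
    return None
\end{verbatim}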
An additional way to describe even more complex gestures is to use other
gestures in a signature. An example is to combine $select - drag$ to specify
that an element must be selected before it can be dragged.

The application of a state machine to describe multi-touch gestures is a
subject well worth exploring in the future.
\section{Daemon implementation}

Section \ref{sec:daemon} proposes the use of a network protocol to communicate
between an architecture implementation and (multiple) gesture-based
applications, as illustrated in figure \ref{fig:daemon}. The reference
implementation does not support network communication. If the architecture
design is to become successful in the future, the implementation of network
communication is a must. ZeroMQ (or $\emptyset$MQ) \cite{ZeroMQ} is a
high-performance messaging library with support for a wide range of
programming languages. A future implementation could use this library as the
basis for its communication layer.
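A minimal sketch of the daemon side of such a communication layer, using
ZeroMQ's Python bindings (pyzmq), is shown below. The socket type, port number
and JSON message format are assumptions made for illustration.

\begin{verbatim}
# Sketch of a gesture daemon publishing gestures over ZeroMQ.
import json
import zmq

context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://*:5556')  # port number is an arbitrary choice

def publish_gesture(gesture_type, x, y):
    # Client applications subscribe to this stream and invoke their own
    # callbacks for the gesture types they are interested in.
    publisher.send_string(json.dumps({
        'type': gesture_type,
        'x': x,
        'y': y,
    }))
\end{verbatim}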
If an implementation of the architecture is released, a good idea would be
to do so within a community of application developers. A community can
contribute to a central database of gesture trackers, making the interaction
from their applications available for use in other applications.

Ideally, a user can install a daemon process containing the architecture so
that it is usable for any gesture-based application on the device. Applications
that use the architecture can specify it as a software dependency, or include
it in a software distribution.
\bibliographystyle{plain}
\bibliography{report}{}

\appendix

\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.
The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information about a
session id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'', which is an
$(x, y)$ position on the screen.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was in the previous message but not in the
current one, the object it represents has been lifted from the screen.

SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events.
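The sketch below shows how consecutive ALIVE messages could be compared to
derive ``point down'' and ``point up'' events. It is a simplification; the
actual translation in the reference implementation may differ.

\begin{verbatim}
# Sketch: deriving point_down/point_up events from ALIVE messages.
previous_ids = set()

def handle_alive(session_ids, positions, delegate):
    # 'positions' maps a session id to its last known (x, y), taken
    # from SET messages; 'delegate' passes events to the architecture.
    global previous_ids
    current_ids = set(session_ids)
    for sid in current_ids - previous_ids:      # newly appeared ids
        x, y = positions[sid]
        delegate('point_down', x, y)
    for sid in previous_ids - current_ids:      # disappeared ids
        x, y = positions[sid]
        delegate('point_up', x, y)
    previous_ids = current_ids
\end{verbatim}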
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:

\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
\chapter{Gesture detection in the reference implementation}
\label{app:implementation-details}

Both rotation and pinch use the centroid of all touch points. A \emph{rotation}
gesture uses the difference in angle relative to the centroid of all touch
points, and \emph{pinch} uses the difference in distance. Both values are
normalized using division by the number of touch points. A pinch event contains
a scale factor, and therefore uses a division of the current by the previous
average distance to the centroid.
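The following sketch illustrates these computations for a previous and a
current list of $(x, y)$ touch points; the exact normalization in the
reference implementation may differ slightly.

\begin{verbatim}
# Sketch of centroid-based rotation and pinch computation.
from math import atan2, hypot

def centroid(points):
    n = float(len(points))
    return (sum(x for x, y in points) / n,
            sum(y for x, y in points) / n)

def rotation_delta(previous, current):
    # Average change in angle of each point relative to the centroid.
    cx, cy = centroid(current)
    total = 0.0
    for (px, py), (x, y) in zip(previous, current):
        total += atan2(y - cy, x - cx) - atan2(py - cy, px - cx)
    return total / len(current)

def pinch_scale(previous, current):
    # Ratio of current to previous average distance to the centroid.
    cx, cy = centroid(current)
    prev_avg = sum(hypot(px - cx, py - cy) for px, py in previous)
    cur_avg = sum(hypot(x - cx, y - cy) for x, y in current)
    return (cur_avg / len(current)) / (prev_avg / len(previous))
\end{verbatim}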
% TODO
\emph{TODO: rotation and pinch will be described somewhat differently and in
more detail.}

\end{document}