
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
Applications that use complex gesture-based interaction need to translate
primitive messages from low-level device drivers to complex, high-level
gestures, and map these gestures to elements in an application. This report
presents a generic architecture for the detection of complex gestures in an
application. The architecture translates device driver messages to a common
set of ``events''. The events are then delegated to a tree of ``event
areas'', which are used to separate groups of events and assign these
groups to an element in the application. Gesture detection is performed on
a group of events assigned to an event area, using detection units called
``gesture trackers''. An implementation of the architecture as a daemon
process would be capable of serving gestures to multiple applications at
the same time. A reference implementation and two test case applications
have been created to test the effectiveness of the architecture design.
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on separate page
\tableofcontents
\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
the primitive ``click'' or ``tap'' events that are used by single-touch devices.
Some examples of more complex gestures are ``pinch''\footnote{A ``pinch''
gesture is formed by performing a pinching movement with multiple fingers on a
multi-touch surface. Pinch gestures are often used to zoom in or out on an
object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an
object and throwing it in a direction on a touch surface, giving it momentum to
move for some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. With the increasing complexity of gestures, the
complexity of the logic required to detect these gestures increases as well.
This challenge limits, or even deters, application developers from using
complex gestures in an application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. With the existence of
such an implementation, application developers do not need to reinvent gesture
detection for every new gesture-based application.
\section{Contents of this document}

The scope of this thesis is limited to the detection of gestures on
multi-touch surface devices. It presents a design for a generic gesture
detection architecture for use in multi-touch based applications. A
reference implementation of this design is used in some test case
applications, whose purpose is to test the effectiveness of the design and
detect its shortcomings.

Chapter \ref{chapter:related} describes related work that inspired the
design of the architecture. The design is described in chapter
\ref{chapter:design}. Chapter \ref{chapter:implementation} presents a
reference implementation of the architecture. Two test case applications
show the practical use of the architecture components in chapter
\ref{chapter:test-applications}. Chapter \ref{chapter:conclusions}
formulates some conclusions about the architecture design and its
practicality. Finally, some suggestions for future research on the subject
are given in chapter \ref{chapter:futurework}.
\chapter{Related work}
\label{chapter:related}

Applications that use gesture-based interaction need a graphical user
interface (GUI) on which gestures can be performed. The creation of a GUI
is a platform-specific task. For instance, Windows and Linux support
different window managers. To create a window in a platform-independent
application, the application would need to include separate functionalities
for supported platforms. For this reason, GUI-based applications are often
built on top of an application framework that abstracts platform-specific
tasks. Frameworks often include a set of tools and events that help the
developer to easily build advanced GUI widgets.

% Existing frameworks (and why they're not good enough)
Some frameworks, such as Nokia's Qt \cite{qt}, provide support for basic
multi-touch gestures like tapping, rotation or pinching. However, the
detection of gestures is embedded in the framework code in an inseparable
way. Consequently, an application developer who wants to use multi-touch
interaction in an application is forced to use an application framework
that includes support for those multi-touch gestures that are required by
the application. Kivy \cite{kivy} is a GUI framework for Python
applications, with support for multi-touch gestures. It uses a basic
gesture detection algorithm that allows developers to define custom
gestures to some degree \cite{kivygesture} using a set of touch point
coordinates. However, these frameworks do not provide support for extension
with custom complex gestures.

Many frameworks are also device-specific, meaning that they are developed
for use on either a tablet, smartphone, PC or other device. OpenNI
\cite{OpenNI2010}, for example, provides APIs only for natural interaction
(NI) devices such as webcams and microphones. The concept of complex
gesture-based interaction, however, is applicable to a much wider set of
devices. VRPN \cite{VRPN} provides a software library that abstracts the
output of devices, which enables it to support a wide set of devices used
in Virtual Reality (VR) interaction. The framework makes the low-level
events of these devices accessible in a client application using network
communication. Gesture detection is not included in VRPN.

% Methods of gesture detection
The detection of high-level gestures from low-level events can be
approached in several ways. GART \cite{GART} is a toolkit for the
development of gesture-based applications, which states that the best way
to classify gestures is to use machine learning. The programmer trains an
application to recognize gestures using a machine learning library from the
toolkit. Though multi-touch input is not directly supported by the toolkit,
the level of abstraction does allow for it to be implemented in the form of
a ``touch'' sensor. The reason to use machine learning is that gesture
detection ``is likely to become increasingly complex and unmanageable''
when using a predefined set of rules to detect whether some sensor input
can be classified as a specific gesture.

The alternative to machine learning is to define a predefined set of rules
for each gesture. Manoj Kumar \cite{win7touch} presents a Windows 7
application, written in Microsoft's .NET, which detects a set of basic
directional gestures based on the movement of a stylus. The complexity of
the code is managed by the separation of different gesture types into
different detection units called ``gesture trackers''. The application
shows that predefined gesture detection rules do not necessarily produce
unmanageable code.
\section{Analysis of related work}

Implementations for the support of complex gesture-based interaction do
already exist. However, gesture detection in these implementations is
device-specific (Nokia Qt and OpenNI) or limited to use within an
application framework (Kivy).

An abstraction of device output allows VRPN and GART to support multiple
devices. However, VRPN does not incorporate gesture detection. GART does,
but only in the form of machine learning algorithms. Many applications for
mobile phones and tablets only use simple gestures such as taps. For this
category of applications, machine learning is an excessively complex method
of gesture detection. Manoj Kumar shows that, if managed well, a predefined
set of gesture detection rules is sufficient to detect simple gestures.

This thesis explores the possibility of creating an architecture that
combines support for multiple input devices with different methods of
gesture detection.
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

Application frameworks are a necessity when it comes to fast,
cross-platform development. A generic architecture design should aim to be
compatible with existing frameworks, and provide a way to detect and extend
gestures independent of the framework. Since an application framework is
written in a specific programming language, the architecture should be
accessible to applications using a language-independent method of
communication. This intention leads towards the concept of a dedicated
gesture detection application that serves gestures to multiple applications
at the same time.

This chapter describes a design for such an architecture. The architecture
components are shown in figure \ref{fig:fulldiagram}. Sections
\ref{sec:multipledrivers} to \ref{sec:daemon} explain the use of all
components in detail.

\fulldiagram
\newpage
\section{Supporting multiple drivers}
\label{sec:multipledrivers}

The TUIO protocol \cite{TUIO} is an example of a driver that can be used by
multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
low-level touch events (section \ref{sec:tuio} describes these in more
detail). These messages are specific to the API of the TUIO protocol.
Other drivers may use different message types. To support more than one
driver in the architecture, there must be some translation from
device-specific messages to a common format for primitive touch events.
After all, the gesture detection logic in a ``generic'' architecture should
not be implemented based on device-specific messages. The event types in
this format should be chosen so that multiple drivers can trigger the same
events. If each supported driver were to add its own set of event types to
the common format, the purpose of it being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

The component that translates device-specific messages to common events is
called the \emph{event driver}. The event driver runs in a loop, receiving
and analyzing driver messages. When a sequence of messages is analyzed as
an event, the event driver delegates the event to other components in the
architecture for translation to gestures.
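To illustrate the role of the event driver, the following sketch shows what a
minimal event driver could look like in Python. The class, method and
attribute names are illustrative and do not correspond to a specific driver
API; the sketch only assumes the common event types described above.
\begin{verbatim}
# Minimal sketch of an event driver; the names used here are illustrative.
class Event(object):
    def __init__(self, event_type, x, y, object_id):
        self.type = event_type  # "point_down", "point_move" or "point_up"
        self.x = x
        self.y = y
        self.object_id = object_id  # identifies a touch object across events

class EventDriver(object):
    """Translates device-specific driver messages to common events."""
    def __init__(self, delegate_event):
        # Callback that passes common events on to the rest of the
        # architecture (the event area tree)
        self.delegate_event = delegate_event

    def receive_message(self, message):
        # Implemented per driver: analyze the message and, when it completes
        # an event, call self.delegate_event(Event(...))
        raise NotImplementedError
\end{verbatim}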
Support for a new touch driver can be added by writing an event driver
implementation. The choice of event driver implementation used in an
application depends on the driver support of the touch device being used.

Because event driver implementations have a common output format in the
form of events, multiple event drivers can be used at the same time (see
figure \ref{fig:multipledrivers}). This design feature allows low-level
events from multiple devices to be aggregated into high-level gestures.

\multipledriversdiagram
\section{Event areas: connecting gesture events to widgets}
\label{sec:areas}

Touch input devices are unaware of the graphical input
widgets\footnote{``Widget'' is a name commonly used to identify an element
of a graphical user interface (GUI).} rendered by an application, and
therefore generate events that simply identify the screen location at which
an event takes place. User interfaces of applications that do not run in
full screen mode are contained in a window. In most cases, events which
occur outside the application window should not be handled by the
application. Moreover, different widgets within the application window
should be able to respond to different gestures. E.g. a button widget may
respond to a ``tap'' gesture to be activated, whereas the application
window responds to a ``pinch'' gesture to be resized. In order to restrict
the occurrence of a gesture to a particular widget in an application, the
events used for the gesture must be restricted to the area of the screen
covered by that widget. An important question is whether the architecture
should offer a solution to this problem, or leave the task of assigning
gestures to application widgets to the application developer.

If the architecture does not provide a solution, the ``gesture detection''
component in figure \ref{fig:fulldiagram} receives all events that occur on
the screen surface. The gesture detection logic thus uses all events as
input to detect a gesture. This leaves no possibility for a gesture to
occur at multiple screen positions at the same time. The problem is
illustrated by figure \ref{fig:ex1}, where two widgets on the screen can be
rotated independently. The component that detects rotation gestures
receives events from all four fingers as input. If the two groups of events
are not separated by clustering them based on the area in which they are
placed, only one rotation event will occur.

\examplefigureone

A gesture detection component could cluster events heuristically, based on
the distance between events. However, this method cannot guarantee that a
cluster of events corresponds to a particular application widget.

In short, a gesture detection component is difficult to implement without
awareness of the location of application widgets. Moreover, the
application developer still needs to direct gestures to a particular widget
manually. This requires geometric calculations in the application logic,
which is a tedious and error-prone task for the developer.

The architecture described here groups events that occur inside the area
covered by a widget, before passing them on to a gesture detection
component. Different gesture detection components can then detect gestures
simultaneously, based on different sets of input events. An area of the
screen surface is represented by an \emph{event area}. An event area
filters input events based on their location, and then delegates events to
gesture detection components that are assigned to the event area. Events
which are located outside the event area are not delegated to its gesture
detection components.

In the example of figure \ref{fig:ex1}, the two rotatable widgets can be
represented by two event areas, each having a different rotation detection
component. Each event area can consist of the four corner locations of the
square it represents. To detect whether an event is located inside a
square, the event areas can use a point-in-polygon (PIP) test \cite{PIP}.
It is the task of the client application to synchronize the corner
locations of the event area with those of the widget.
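As an illustration, the following sketch shows how a polygonal event area
could implement such a containment test using the ray casting variant of the
PIP test. The class name and its interface are illustrative and not
necessarily those of the reference implementation.
\begin{verbatim}
# Sketch of a polygonal event area using a ray casting point-in-polygon
# test; the class name and interface are illustrative.
class PolygonArea(object):
    def __init__(self, corners):
        self.corners = corners  # list of (x, y) tuples

    def contains(self, x, y):
        """Return True if (x, y) lies inside the polygon."""
        inside = False
        n = len(self.corners)
        for i in range(n):
            x1, y1 = self.corners[i]
            x2, y2 = self.corners[(i + 1) % n]
            # Does edge (x1, y1)-(x2, y2) cross the horizontal ray at y?
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / float(y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside
\end{verbatim}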
\subsection{Callback mechanism}

When a gesture is detected by a gesture detection component, it must be
handled by the client application. A common way to handle events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, which is called when the event occurs. Because
developers are familiar with this concept, the architecture uses a
callback mechanism to handle gestures in an application. Callback handlers
are bound to event areas, since event areas control the grouping of events
and thus the occurrence of gestures in an area of the screen.
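From the application developer's point of view, binding a gesture handler to
an event area could look like the following sketch; the \texttt{bind} method
and the attributes of the gesture object are assumptions used for
illustration.
\begin{verbatim}
# Sketch of the callback mechanism; bind() and the gesture attributes are
# illustrative names.
def on_tap(gesture):
    print("tap at (%d, %d)" % (gesture.x, gesture.y))

# A rectangular event area covering a widget of 300 by 200 pixels
area = RectangularArea(0, 0, 300, 200)

# on_tap is called whenever a "tap" gesture occurs inside the area
area.bind("tap", on_tap)
\end{verbatim}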
\subsection{Area tree}
\label{sec:tree}

A basic data structure for the event areas in the architecture would be a
flat list. When the event driver delegates an event, it is accepted by
each event area that contains the event coordinates.

If the architecture is to be used in combination with an application
framework, each widget that responds to gestures should have a mirroring
event area that synchronizes its location with that of the widget. Consider
a panel with five buttons that all listen to a ``tap'' event. If the
location of the panel changes as a result of movement of the application
window, the positions of all buttons have to be updated too.

This process is simplified by the arrangement of event areas in a tree
structure. A root event area represents the panel, containing five other
event areas which are positioned relative to the root area. The relative
positions do not need to be updated when the panel area changes its
position. GUI toolkits use this kind of tree structure to manage graphical
widgets.
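The panel example could be expressed as follows. The constructor arguments
and the \texttt{add\_child} method are illustrative; the essential point is
that the position of a child area is stored relative to its parent.
\begin{verbatim}
# Sketch of the panel example as an event area tree; add_child() and the
# constructor arguments (x, y, width, height) are illustrative.
panel = RectangularArea(100, 100, 500, 80)

for i in range(5):
    # Button areas are positioned relative to the panel area
    button = RectangularArea(10 + i * 95, 10, 85, 60)
    panel.add_child(button)

# Moving the panel implicitly moves all buttons, since their positions are
# interpreted relative to the parent area
panel.set_position(150, 300)
\end{verbatim}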
If the GUI toolkit provides an API for requesting the position and size of
a widget, a recommended first step when developing an application is to
create an event area subclass that automatically synchronizes its position
with that of a widget from the GUI framework. For example, the test
application described in section \ref{sec:testapp} extends the GTK+
\cite{GTK} application window widget with the functionality of a
rectangular event area, to direct touch events to an application window.
\subsection{Event propagation}
\label{sec:eventpropagation}

Another problem occurs when event areas overlap, as shown by figure
\ref{fig:eventpropagation}. When the white square is dragged, the gray
square should stay at its current position. This means that events that are
used for dragging of the white square should not be used for dragging of
the gray square. The use of event areas alone does not provide a solution
here, since both the gray and the white event area accept an event that
occurs within the white square.

The problem described above is a common problem in GUI applications, and
there is a common solution (used by GTK+ \cite{gtkeventpropagation}, among
others). An event is passed to an ``event handler''. If the handler returns
\texttt{true}, the event is considered ``handled'' and is not
``propagated'' to other widgets. Applied to the example of the draggable
squares, the drag detection component of the white square should stop
the propagation of events to the event area of the gray square.

In the example, dragging of the white square has priority over dragging of
the gray square because the white area is the widget actually being touched
at the screen surface. In general, events should be delegated to event
areas according to the order in which the event areas are positioned over
each other. The tree structure in which event areas are arranged is an
ideal tool to determine the order in which an event is delegated. An
object touching the screen is essentially touching the deepest event area
in the tree that contains the triggered event, which must be the first to
receive the event. When the gesture trackers of the event area are
finished with the event, it is propagated to the parent and siblings in the
event area tree. Optionally, a gesture tracker can stop the propagation of
the event by its corresponding event area. Figure
\ref{fig:eventpropagation} demonstrates event propagation in the example of
the draggable squares.

\eventpropagationfigure
An additional type of event propagation is ``immediate propagation'', which
indicates propagation of an event from one gesture tracker to another. This
is applicable when an event area uses more than one gesture tracker. When
regular propagation is stopped, the event is still propagated to the other
gesture trackers of the area first, before actually being stopped. One of
the gesture trackers can also stop the immediate propagation of an event,
so that the event is not passed to the next gesture tracker, nor to the
ancestors of the event area.
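The delegation order and the two types of propagation can be summarized in
the following sketch. The class layout and method names are illustrative and
not the actual implementation; the sketch only reflects the rules described
above: the deepest containing area is served first, stopping regular
propagation still lets the remaining trackers of the area see the event, and
stopping immediate propagation skips those trackers as well.
\begin{verbatim}
# Sketch of event delegation with propagation; names are illustrative.
class EventArea(object):
    def __init__(self):
        self.children = []
        self.trackers = []

    def contains(self, event):
        raise NotImplementedError  # e.g. a point-in-polygon test

    def delegate_event(self, event):
        """Returns True if propagation of the event was stopped."""
        if not self.contains(event):
            return False

        # Areas added later are assumed to lie on top of their siblings,
        # so children are served in right-to-left order, deepest area first
        for child in reversed(self.children):
            if child.delegate_event(event):
                return True  # a descendant stopped the propagation

        stopped = False
        for tracker in self.trackers:
            result = tracker.handle_event(event)
            if result == "stop_immediate":
                return True      # skip remaining trackers and all ancestors
            if result == "stop":
                stopped = True   # remaining trackers still see the event
        return stopped
\end{verbatim}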
The concept of an event area is based on the assumption that the set of
originating events that form a particular gesture can be determined
exclusively based on the location of the events. This is a reasonable
assumption for simple touch objects whose only parameter is a position,
such as a pen or a human finger. However, more complex touch objects can
have additional parameters, such as rotational orientation or color. An
even more generic concept is the \emph{event filter}, which determines
whether an event should be assigned to a particular gesture detection
component based on all available parameters. This level of abstraction
provides additional methods of interaction. For example, a camera-based
multi-touch surface could make a distinction between gestures performed
with a blue gloved hand, and gestures performed with a green gloved hand.

As mentioned in the introduction (chapter \ref{chapter:introduction}), the
scope of this thesis is limited to multi-touch surface based devices, for
which the \emph{event area} concept suffices. Section \ref{sec:eventfilter}
explores the possibility of replacing event areas with event filters.
\section{Detecting gestures from low-level events}
\label{sec:gesture-detection}

The low-level events that are grouped by an event area must be translated
to high-level gestures in some way. Simple gestures, such as a tap or the
dragging of an element using one finger, are easy to detect by comparing
the positions of sequential $point\_down$ and $point\_move$ events. More
complex gestures, like the writing of a character from the alphabet,
require more advanced detection algorithms.

Sequences of events that are triggered by a multi-touch surface are
often of manageable complexity. An imperative programming style is
sufficient to detect many common gestures, like rotation and dragging. The
imperative programming style is also familiar and understandable for a wide
range of application developers. Therefore, the architecture should support
this style of gesture detection. A problem with an imperative programming
style is that the explicit detection of different gestures requires
different gesture detection components. If these components are not managed
well, the detection logic is prone to become chaotic and over-complex.

A way to detect more complex gestures based on a sequence of input events
is with the use of machine learning methods, such as the Hidden Markov
Models (HMM)\footnote{A Hidden Markov Model (HMM) is a statistical model
without a memory; it can be used to detect gestures based on the current
input state alone.} used for sign language detection by Gerhard Rigoll et
al. \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to
a feature vector that is recognized as a particular gesture with a certain
probability. An advantage of using machine learning compared to an
imperative programming style is that complex gestures are described
without the use of explicit detection logic, thus reducing code complexity.
For example, the detection of the character `A' being written on the screen
is difficult to implement using explicit detection code, whereas a trained
machine learning system can produce a match with relative ease.

To manage complexity and support multiple styles of gesture detection
logic, the architecture has adopted the tracker-based design as described
by Manoj Kumar \cite{win7touch}. Different detection components are wrapped
in separate gesture tracking units called \emph{gesture trackers}. The
input of a gesture tracker is provided by an event area in the form of
events. Each gesture detection component is wrapped in a gesture tracker
with a fixed type of input and output. Internally, the gesture tracker can
adopt any programming style. A character recognition component can use an
HMM, whereas a tap detection component defines a simple function that
compares event coordinates.

When a gesture tracker detects a gesture, this gesture is triggered in the
corresponding event area. The event area then calls the callback functions
that are bound to the gesture type by the application.

The use of gesture trackers as small detection units allows extension of
the architecture. A developer can write a custom gesture tracker and
register it in the architecture. The tracker can use any type of detection
logic internally, as long as it translates low-level events to high-level
gestures.

An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
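To give an impression of the gesture tracker interface, the following sketch
shows a minimal tracker that translates the movement of a single touch point
into \emph{drag} gestures. Only \texttt{GestureTracker} and
\texttt{supported\_gestures} appear in the reference implementation; the
remaining names are assumptions.
\begin{verbatim}
# Sketch of a minimal gesture tracker that maps the movement of a single
# touch point to "drag" gestures. Apart from GestureTracker and
# supported_gestures, the names used here are assumptions.
class SimpleDragTracker(GestureTracker):
    supported_gestures = ["drag"]

    def __init__(self, area):
        GestureTracker.__init__(self, area)
        self.last_position = None

    def on_point_down(self, event):
        self.last_position = (event.x, event.y)

    def on_point_move(self, event):
        if self.last_position is None:
            return
        dx = event.x - self.last_position[0]
        dy = event.y - self.last_position[1]
        self.last_position = (event.x, event.y)
        # Trigger a "drag" gesture in the corresponding event area
        self.trigger("drag", dx=dx, dy=dy)

    def on_point_up(self, event):
        self.last_position = None
\end{verbatim}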
\section{Serving multiple applications}
\label{sec:daemon}

The design of the architecture is essentially complete with the components
specified in this chapter. However, one specification has not yet been
discussed: the ability to address the architecture using a method of
communication independent of the application's programming language.

If the architecture and a gesture-based application are written in the same
language, the main loop of the architecture can run in a separate thread of
the application. If the application is written in a different language, the
architecture has to run in a separate process. Since the application needs
to respond to gestures that are triggered by the architecture, there must
be a communication layer between the separate processes.

A common and efficient way of communication between two separate processes
is through the use of a network protocol. The architecture could run as a
daemon\footnote{A ``daemon'' is the Unix term for a process that runs in the
background.} process, listening to driver messages and triggering gestures
in registered applications.

\daemondiagram

An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would have to start its own instance of the architecture in a
separate process, which would be less efficient. The network communication
layer also allows the architecture and a client application to run on
separate machines, thus distributing computational load. The other machine
may even use a different operating system.
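As an illustration of what the communication layer of such a daemon could
look like, the sketch below broadcasts detected gestures to connected client
applications as JSON messages over a TCP socket. The protocol, the port
number and the class are purely hypothetical; the reference implementation
presented in this thesis does not include a network layer.
\begin{verbatim}
import json
import socket
import threading

# Hypothetical sketch of a gesture daemon's communication layer: gestures
# are broadcast to connected clients as JSON lines over TCP. The port
# number is arbitrary.
class GestureServer(object):
    def __init__(self, port=7000):
        self.clients = []
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind(("", port))
        self.sock.listen(5)
        thread = threading.Thread(target=self.accept_clients)
        thread.daemon = True
        thread.start()

    def accept_clients(self):
        while True:
            client, address = self.sock.accept()
            self.clients.append(client)

    def broadcast_gesture(self, gesture_type, **params):
        message = json.dumps({"gesture": gesture_type, "params": params})
        for client in self.clients:
            client.sendall((message + "\n").encode("utf-8"))
\end{verbatim}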
\chapter{Reference implementation}
\label{chapter:implementation}

A reference implementation of the design has been written in Python and is
available at \cite{gitrepos}. The implementation does not include a network
protocol to support the daemon setup as described in section \ref{sec:daemon}.
Therefore, it is only usable in Python programs. The two test applications
described in chapter \ref{chapter:test-applications} are also written in
Python.

To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events.

The following components are included in the implementation:

\textbf{Event drivers}
\begin{itemize}
\item TUIO driver, using only the support for simple touch points with an
$(x, y)$ position.
\end{itemize}

\textbf{Event areas}
\begin{itemize}
\item Circular area
\item Rectangular area
\item Polygon area
\item Full screen area
\end{itemize}

\textbf{Gesture trackers}
\begin{itemize}
\item Basic tracker, supports $point\_down,~point\_move,~point\_up$ gestures.
\item Tap tracker, supports $tap,~single\_tap,~double\_tap$ gestures.
\item Transformation tracker, supports $rotate,~pinch,~drag,~flick$ gestures.
\end{itemize}

The implementation of the TUIO event driver is described in section
\ref{sec:tuio}.

The reference implementation also contains some geometric functions that are
used by several event area implementations. The names of the event area
implementations speak for themselves, so they are not discussed in detail in
this report.

All gesture trackers have been implemented using an imperative programming
style. Section \ref{sec:tracker-registration} shows how gesture trackers can be
added to the architecture. Sections \ref{sec:basictracker} to
\ref{sec:transformationtracker} describe the gesture tracker implementations in
detail.
\section{The TUIO event driver}
\label{sec:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default). For efficiency reasons, the
TUIO protocol is encoded using the Open Sound Control \cite[OSC]{OSC} format.
An OSC server/client implementation is available for Python: pyOSC
\cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, a bug causes the execution of an example script to yield an error in
Python's built-in \texttt{socket} library. Therefore, the TUIO event driver
receives TUIO messages at a lower level, using the pyOSC package to receive
TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of ``session'' id's that are
currently ``active'', which in the case of a multi-touch table means that they
are touching the touch surface. A SET message provides geometric information
about a session, such as position, velocity and acceleration. Each session
represents an object touching the touch surface. The only type of object on
the multi-touch table is what the TUIO protocol calls ``2DCur'': an $(x, y)$
position on the touch surface.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was present in the previous message but
not in the current one, the object it represents has been lifted from the
screen. SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included. ALIVE and SET messages
are combined to create \emph{point\_down}, \emph{point\_move} and
\emph{point\_up} events by the TUIO event driver.
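The essence of this translation can be sketched as follows. The surrounding
class and handler names are illustrative; only the session id and the
$(x, y)$ position come from the TUIO messages themselves.
\begin{verbatim}
# Sketch of how ALIVE and SET messages map to point_{down,move,up} events;
# the class and method names are illustrative.
class TuioEventDriver(object):
    def __init__(self, delegate_event):
        self.delegate_event = delegate_event
        self.alive = {}  # session id -> (x, y) of active sessions

    def on_alive(self, session_ids):
        # Sessions missing from the ALIVE list have been lifted
        for sid in set(self.alive) - set(session_ids):
            x, y = self.alive.pop(sid)
            self.delegate_event("point_up", sid, x, y)

    def on_set(self, sid, x, y):
        if sid not in self.alive:
            self.delegate_event("point_down", sid, x, y)  # new touch point
        elif self.alive[sid] != (x, y):
            self.delegate_event("point_move", sid, x, y)  # moved touch point
        self.alive[sid] = (x, y)
\end{verbatim}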
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the touch surface and $(1.0, 1.0)$ the bottom right corner. The
TUIO event driver scales these to pixel coordinates so that event area
implementations can use pixel coordinates to determine whether an event is
located within them. This transformation is also mentioned by the online
TUIO specification \cite{TUIO_specification}:
\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
\newpage
\section{Gesture tracker registration}
\label{sec:tracker-registration}

When a gesture handler is added to an event area by an application, the event
area must create a gesture tracker that detects the corresponding gesture. To
do this, the architecture must be aware of the existing gesture trackers and
the gestures they support. The architecture provides a registration system for
gesture trackers. Each gesture tracker implementation contains a list of
supported gesture types. These gesture types are mapped to the gesture tracker
class by the registration system. When an event area needs to create a gesture
tracker for a gesture type that is not yet being detected, the class of the
gesture tracker to create is looked up in this map. Registration of a
gesture tracker is straightforward, as shown by the following Python code:
\begin{verbatim}
from trackers import register_tracker

# Create a gesture tracker implementation
class TapTracker(GestureTracker):
    supported_gestures = ["tap", "single_tap", "double_tap"]

    # Methods for gesture detection go here

# Register the gesture tracker with the architecture
register_tracker(TapTracker)
\end{verbatim}
\section{Basic tracker}
\label{sec:basictracker}

The ``basic tracker'' implementation exists only to provide access to low-level
events in an application. Low-level events are only handled by gesture
trackers, not by the application itself. Therefore, the basic tracker maps
\emph{point\_\{down,move,up\}} events to equally named gestures that can be
handled by the application.
\section{Tap tracker}
\label{sec:taptracker}

The ``tap tracker'' detects three types of tap gestures:
\begin{enumerate}
\item The basic \emph{tap} gesture is triggered when a touch point releases
the touch surface within a certain time and distance of its initial
position. When a \emph{point\_down} event is received, its location is
saved along with the current timestamp. On the next \emph{point\_up}
event of the touch point, the differences in time and position with the
saved values are compared to predefined thresholds to determine whether
a \emph{tap} gesture should be triggered.
\item A \emph{double tap} gesture consists of two sequential \emph{tap}
gestures that are located within a certain distance of each other, and
occur within a certain time window. When a \emph{tap} gesture is
triggered, the tracker saves it as the ``last tap'' along with the
current timestamp. When another \emph{tap} gesture is triggered, its
location and the current timestamp are compared to those of the ``last
tap'' gesture to determine whether a \emph{double tap} gesture should
be triggered. If so, the gesture is triggered at the location of the
``last tap'', because the second tap may be less accurate.
\item A separate thread handles detection of \emph{single tap} gestures at
a rate of thirty times per second. When the time since the ``last tap''
exceeds the maximum time between the two taps of a \emph{double tap}
gesture, a \emph{single tap} gesture is triggered.
\end{enumerate}

The \emph{single tap} gesture exists to make it possible to distinguish between
single and double tap gestures. This distinction is not possible with the
regular \emph{tap} gesture, since the first \emph{tap} gesture has already been
handled by the application when the second \emph{tap} of a \emph{double tap}
gesture is triggered.
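The threshold test for the basic \emph{tap} gesture can be sketched as
follows. The threshold values and the class layout are illustrative; the
actual values in the reference implementation may differ.
\begin{verbatim}
import time

# Sketch of the threshold test for a basic "tap" gesture; the threshold
# values and names are illustrative.
TAP_MAX_TIME = 0.3       # maximum time between point_down and point_up (s)
TAP_MAX_DISTANCE = 10.0  # maximum distance between down and up (pixels)

class TapDetector(object):
    def __init__(self):
        self.downs = {}  # touch point id -> (x, y, timestamp)

    def on_point_down(self, event):
        self.downs[event.object_id] = (event.x, event.y, time.time())

    def on_point_up(self, event):
        x, y, t = self.downs.pop(event.object_id)
        distance = ((event.x - x) ** 2 + (event.y - y) ** 2) ** 0.5
        if time.time() - t < TAP_MAX_TIME and distance < TAP_MAX_DISTANCE:
            return "tap"  # the actual tracker triggers the gesture here
\end{verbatim}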
\section{Transformation tracker}
\label{sec:transformationtracker}

The transformation tracker triggers \emph{rotate}, \emph{pinch}, \emph{drag}
and \emph{flick} gestures. These gestures use the centroid of all touch points.
A \emph{rotate} gesture uses the difference in angle relative to the centroid
of all touch points, and \emph{pinch} uses the difference in distance. Both
values are normalized using division by the number of touch points $N$. A
\emph{pinch} gesture contains a scale factor, and therefore uses a division of
the current by the previous average distance to the centroid. Any movement of
the centroid is used for \emph{drag} gestures. When a dragged touch point is
released, a \emph{flick} gesture is triggered in the direction of the
\emph{drag} gesture.

Figure \ref{fig:transformationtracker} shows an example situation in which a
touch point is moved, triggering a \emph{pinch} gesture, a \emph{rotate}
gesture and a \emph{drag} gesture.

\transformationtracker

The \emph{pinch} gesture in figure \ref{fig:pinchrotate} uses the ratio
$d_2:d_1$ to calculate its $scale$ parameter. Note that the difference in
distance $d_2 - d_1$ and the difference in angle $\alpha$ both relate to a
single touch point. The \emph{pinch} and \emph{rotate} gestures that are
triggered relate to all touch points, using the average of distances and
angles. Since all touch points except one have not moved, their
differences in distance and angle are zero. Thus, the averages can be
calculated by dividing the differences in distance and angle of the moved touch
point by the number of touch points $N$. The $scale$ parameter represents the
scale relative to the previous situation, which results in the following
formula:
$$pinch.scale = \frac{d_1 + \frac{d_2 - d_1}{N}}{d_1}$$
The angle used for the \emph{rotate} gesture is only divided by the number of
touch points to obtain an average rotation of all touch points:
$$rotate.angle = \frac{\alpha}{N}$$
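Expressed in code, the two parameters for a single moved touch point follow
directly from these formulas. Here $d_1$ and $d_2$ are the previous and
current distances of the moved point to the centroid, $\alpha$ is its change
in angle and $N$ the number of touch points:
\begin{verbatim}
# Pinch and rotate parameters for one moved touch point, following the
# formulas above; d1, d2, alpha and n correspond to the symbols in the text.
def pinch_scale(d1, d2, n):
    return (d1 + (d2 - d1) / float(n)) / d1

def rotate_angle(alpha, n):
    return alpha / float(n)
\end{verbatim}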
\chapter{Test applications}
\label{chapter:test-applications}

Two test case applications have been created to test if the design ``works'' in
a practical application, and to detect its flaws. One application is mainly
used to test the gesture tracker implementations. The second application uses
multiple event areas in a tree structure, demonstrating event delegation and
propagation. The second application also defines a custom gesture tracker.
\section{Full screen Pygame application}

%The goal of this application was to experiment with the TUIO
%protocol, and to discover requirements for the architecture that was to be
%designed. When the architecture design was completed, the application was rewritten
%using the new architecture components. The original variant is still available
%in the ``experimental'' folder of the Git repository \cite{gitrepos}.

An implementation of the detection of some simple multi-touch gestures (single
tap, double tap, rotation, pinch and drag) using Processing\footnote{Processing
is a Java-based programming environment with an export possibility for Android.
See also \cite{processing}.} can be found in a forum on the Processing website
\cite{processingMT}. The application has been ported to Python and adapted to
receive input from the TUIO protocol. The implementation is fairly simple, but
it yields some appealing results (see figure \ref{fig:draw}). In the original
application, the detection logic of all gestures is combined in a single class
file. As predicted by the GART article \cite{GART}, this leads to over-complex
code that is difficult to read and debug.

The original application code consists of two main classes. The ``multi-touch
server'' starts a ``TUIO server'' that translates TUIO events to
``point\_\{down,move,up\}'' events. Detection of ``tap'' and ``double tap''
gestures is performed immediately after an event is received. Other gesture
detection runs in a separate thread, using the following loop:
\begin{verbatim}
60 times per second do:
    detect `single tap' based on the time since the latest `tap' gesture
    if points have been moved, added or removed since last iteration do:
        calculate the centroid of all points
        detect `drag' using centroid movement
        detect `rotation' using average orientation of all points to centroid
        detect `pinch' using average distance of all points to centroid
\end{verbatim}
There are two problems with the implementation described above. In the first
place, low-level events are not grouped before gesture detection. The gesture
detection uses all events for a single gesture. Therefore, only one element at
a time can be rotated, resized etc. (see also section \ref{sec:areas}).
Secondly, all detection code is located in the same class file. To extend the
application with new gestures, a programmer must extend the code in this class
file and therefore understand its structure. Since the main loop calls specific
gesture detection components explicitly in a certain order, the programmer must
alter the main loop to call custom gesture detection code. This is a problem
because this way of extending code is not scalable over time. The class file
would become more and more complex when extended with new gestures. The two
problems have been solved using event areas and gesture trackers from the
reference implementation. The gesture detection code has been separated into
two different gesture trackers, which are the ``tap'' and ``transformation''
trackers mentioned in chapter \ref{chapter:implementation}.

The positions of all touch objects and their centroid are drawn using the
Pygame library. Since the Pygame library does not provide support to find the
location of the display window, the root event area captures events on the
entire screen surface. The application can be run either full screen or in
windowed mode. If windowed, screen-wide gesture coordinates are mapped to the
size of the Pygame window. In other words, the Pygame window always represents
the entire touch surface. The output of the application can be seen in figure
\ref{fig:draw}.

\begin{figure}[h!]
\center
\includegraphics[scale=0.4]{data/pygame_draw.png}
\caption{
    Output of the experimental drawing program. It draws all touch points
    and their centroid on the screen (the centroid is used for rotation and
    pinch detection). It also draws a green rectangle which responds to
    rotation and pinch events.
}
\label{fig:draw}
\end{figure}
\section{GTK+/Cairo application}
\label{sec:testapp}

The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create
its user interface. The PyGTK library \cite{PyGTK} is used to address GTK+
functions in the Python application. Since GTK+ defines a main event loop that
must be started in order to use the interface, the architecture implementation
runs in a separate thread.

The application creates a main window, whose size and position are synchronized
with the root event area of the architecture. The synchronization is handled
automatically by a \texttt{GtkEventWindow} object, which is a subclass of
\texttt{gtk.Window}. This object serves as a layer that connects the event area
functionality of the architecture to GTK+ windows. The following Python code
captures the essence of the synchronization layer:
\begin{verbatim}
class GtkEventWindow(Window):
    def __init__(self, width, height):
        Window.__init__(self)

        # Create an event area to represent the GTK window in the gesture
        # detection architecture
        self.area = RectangularArea(0, 0, width, height)

        # The "configure-event" signal is triggered by GTK when the position
        # or size of the window are updated
        self.connect("configure-event", self.sync_area)

    def sync_area(self, win, event):
        # Synchronize the position and size of the event area with that of
        # the GTK window
        self.area.width = event.width
        self.area.height = event.height
        self.area.set_position(*event.get_coords())
\end{verbatim}
The application window contains a number of polygons which can be dragged,
resized and rotated. Each polygon is represented by a separate event area to
allow simultaneous interaction with different polygons. The main window also
responds to transformation, by transforming all polygons. Additionally, tapping
on a polygon changes its color. Double tapping on the application window
toggles it between full screen and windowed mode.

An ``overlay'' event area is used to detect all fingers currently touching the
screen. The application defines a custom gesture tracker, called the ``hand
tracker'', which is used by the overlay. The hand tracker uses distances
between detected fingers to detect which fingers belong to the same hand (see
section \ref{sec:handtracker} for details). The application draws a line from
each finger to the hand it belongs to, as visible in figure \ref{fig:testapp}.

\begin{figure}[h!]
\center
\includegraphics[scale=0.35]{data/testapp.png}
\caption{
    Screenshot of the second test application. Two polygons can be dragged,
    rotated and scaled. Separate groups of fingers are recognized as hands;
    each hand is drawn as a centroid with a line to each finger.
}
\label{fig:testapp}
\end{figure}
To manage the propagation of events used for transformations and tapping, the
application arranges its event areas in a tree structure as described in
section \ref{sec:tree}. Each transformable event area has its own
``transformation tracker'', which stops the propagation of events used for
transformation gestures. Because the propagation of these events is stopped,
overlapping polygons do not cause a problem. Figure \ref{fig:testappdiagram}
shows the tree structure used by the application.

Note that the overlay event area, though covering the entire screen surface, is
not used as the root of the event area tree. Instead, the overlay is placed on
top of the application window (being a rightmost sibling of the application
window event area in the tree). This is necessary because the transformation
trackers in the application window stop the propagation of events. The hand
tracker needs to capture all events to be able to give an accurate
representation of all fingers touching the screen. Therefore, the overlay
should delegate events to the hand tracker before they are stopped by a
transformation tracker. Placing the overlay over the application window forces
the screen event area to delegate events to the overlay event area first. The
event area implementation delegates events to its children in right-to-left
order, because areas that are added to the tree later are assumed to be
positioned over their previously added siblings.

\testappdiagram
\subsection{Hand tracker}
\label{sec:handtracker}

The hand tracker sees each touch point as a finger. Based on a predefined
distance threshold, each finger is assigned to a hand. Each hand consists of a
list of finger locations, and the centroid of those locations.

When a new finger is detected on the touch surface (a \emph{point\_down} event),
the distance from that finger to all hand centroids is calculated. The hand to
which the distance is the shortest may be the hand that the finger belongs to.
If the distance is larger than the predefined distance threshold, the finger is
assumed to be a new hand and a \emph{hand\_down} gesture is triggered.
Otherwise, the finger is assigned to the closest hand. In both cases, a
\emph{finger\_down} gesture is triggered.

Each touch point is assigned an ID by the reference implementation. When the
hand tracker assigns a finger to a hand after a \emph{point\_down} event, its
touch point ID is saved in a hash map\footnote{In computer science, a hash
table or hash map is a data structure that uses a hash function to map
identifying values, known as keys (e.g., a person's name), to their associated
values (e.g., their telephone number). Source: Wikipedia \cite{wikihashmap}.}
with the \texttt{Hand} object. When a finger moves (a \emph{point\_move} event)
or releases the touch surface (a \emph{point\_up} event), the corresponding
hand is loaded from the hash map and triggers a \emph{finger\_move} or
\emph{finger\_up} gesture. If a released finger is the last of a hand, that
hand is removed with a \emph{hand\_up} gesture.
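The assignment step described above can be sketched as follows. The
\texttt{Hand} class layout, the helper function and the threshold value are
assumptions based on this description, not the exact code of the test
application.
\begin{verbatim}
# Sketch of the hand assignment step of the hand tracker; the class layout
# and the threshold value are assumptions based on the description above.
HAND_DISTANCE_THRESHOLD = 200.0  # pixels

class Hand(object):
    def __init__(self):
        self.fingers = {}  # touch point id -> (x, y)

    def centroid(self):
        xs = [x for x, y in self.fingers.values()]
        ys = [y for x, y in self.fingers.values()]
        return (sum(xs) / float(len(xs)), sum(ys) / float(len(ys)))

def assign_finger(hands, point_id, x, y):
    """Assign a new finger to the closest hand, or create a new hand."""
    def distance_to(hand):
        cx, cy = hand.centroid()
        return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5

    closest = min(hands, key=distance_to) if hands else None
    if closest is None or distance_to(closest) > HAND_DISTANCE_THRESHOLD:
        closest = Hand()        # a hand_down gesture is triggered here
        hands.append(closest)
    closest.fingers[point_id] = (x, y)
    return closest              # a finger_down gesture is triggered here
\end{verbatim}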
  770. \section{Results}
  771. \label{sec:results}
  772. The Pygame application is based on existing program code, which has been be
  773. broken up into the components of the architecture. The application incorporates
  774. the most common multi-touch gestures, such as tapping and transformation
  775. gestures. All features from the original application are still supported in the
  776. revised application, so the component-based architecture design does not
  777. propose a limiting factor. Rather than that, the program code has become more
  778. maintainable and extendable due to the modular setup. The gesture tracker-based
  779. design has even allowed the detection of tap and transformation gestures to be
  780. moved to the reference implementation of the architecture, whereas it was
  781. originally part of the test application.
The GTK+ application uses a more extended tree structure to arrange its event
areas, so that it can use the powerful concept of event propagation. The
application does show that the construction of such a tree is not always
straightforward: the ``overlay'' event area covers the entire touch surface,
but is not the root of the tree. Designing the tree structure requires an
understanding of event propagation by the application developer.
Some work goes into the synchronization of application widgets with their event
areas. The GTK+ application defines a class that acts as a synchronization
layer between the application window and its event area in the architecture.
This synchronization layer could be reused in other applications that use GTK+.

The ``hand tracker'' used by the GTK+ application is not part of the
architecture itself. Because the architecture is based on gesture trackers, the
application can register this new tracker, and thereby add new gestures, using
a single line of code (see section \ref{sec:tracker-registration}).
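As an illustration, such a registration could look like the following call. The
method name and its parameter are hypothetical; section
\ref{sec:tracker-registration} describes the actual interface.

\begin{verbatim}
# Hypothetical registration call: attach the application-specific
# hand tracker to the overlay event area.
overlay_area.add_tracker(HandTracker())
\end{verbatim}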
Apart from the synchronization of event areas with application widgets, neither
application has any trouble using the architecture implementation in
combination with its application framework. Thus, the architecture can be
used alongside existing application frameworks.
\chapter{Conclusions}
\label{chapter:conclusions}

To support different devices, there must be an abstraction of device drivers so
that gesture detection can be performed on a common set of low-level events.
This abstraction is provided by the event driver.

Gestures must be able to occur within a certain area of a touch surface that is
covered by an application widget. Therefore, low-level events must be divided
into separate groups before any gesture detection is performed. Event areas
provide a way to accomplish this. Overlapping event areas are ordered in a tree
structure that can be synchronized with the widget tree of the application.
Some applications require the ability to handle an event exclusively for an
event area. An event propagation mechanism provides a solution for this: the
propagation of an event in the tree structure can be stopped after gesture
detection in an event area. \\
Section \ref{sec:testapp} shows that the structure of the event area tree is
not necessarily equal to that of the application widget tree. The design of the
event area tree structure in complex situations requires an understanding of
event propagation by the application programmer.
The detection of complex gestures can be approached in several ways. If
explicit detection code for different gestures is not managed well, program
code can become needlessly complex. A tracker-based design, in which the
detection of different types of gestures is separated into different gesture
trackers, reduces complexity and provides a way to extend a set of detection
algorithms. The use of gesture trackers is flexible; e.g., complex detection
algorithms such as machine learning can be used simultaneously with other
gesture trackers that use explicit detection code. Also, the modularity of this
design allows extension of the set of supported gestures. Section
\ref{sec:testapp} demonstrates this extensibility.
A truly generic architecture should provide a communication interface that
supports multiple programming languages. A daemon implementation as described
in section \ref{sec:daemon} is an example of such an interface. With this
feature, the architecture can be used in combination with a wide range of
application frameworks.
\chapter{Suggestions for future work}
\label{chapter:futurework}
\section{A generic method for grouping events}
\label{sec:eventfilter}

As mentioned in section \ref{sec:areas}, the concept of an event area is based
on the assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
Since this thesis focuses on multi-touch surface-based devices, and every
object on a multi-touch surface has a position, this assumption is valid.
However, the design of the architecture is meant to be more generic: to provide
a structured design for managing gesture detection.
An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect},
provides 3D positions. Some multi-touch tables work with a camera that can also
determine the shape and rotational orientation of objects touching the surface.
For these devices, events delegated by the event driver have more parameters
than a 2D position alone. The term ``event area'' is not suitable to describe a
group of events that consists of these parameters.
A more generic term for a component that groups similar events is an
\emph{event filter}. The concept of an event filter is based on the same
principle as event areas, namely the assumption that gestures are formed from
a subset of all low-level events. However, an event filter takes all parameters
of an event into account. On the camera-based multi-touch table mentioned
above, one application could be to group all triangular objects into one filter
and all rectangular objects into another; another could be to separate small
fingertips from large ones, in order to recognize whether a child or an adult
is touching the table.
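A minimal Python sketch of such an event filter is given below. The
predicate-based design and the parameter names (\texttt{shape},
\texttt{contact\_size}) are illustrative assumptions, not part of the reference
implementation.

\begin{verbatim}
class EventFilter:
    """Groups events by an arbitrary predicate over their parameters,
    generalizing the location test performed by an event area."""

    def __init__(self, predicate):
        self.predicate = predicate
        self.trackers = []

    def delegate_event(self, event):
        # Only events matching the predicate reach the trackers
        # attached to this filter.
        if self.predicate(event):
            for tracker in self.trackers:
                tracker.handle_event(event)

# Example filters for a camera-based table whose events carry a
# 'shape' and a 'contact_size' parameter (hypothetical names).
triangles = EventFilter(lambda e: e.params.get('shape') == 'triangle')
rectangles = EventFilter(lambda e: e.params.get('shape') == 'rectangle')
children = EventFilter(lambda e: e.params.get('contact_size', 0) < 1.0)
\end{verbatim}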
\section{Using a state machine for gesture detection}

All gesture trackers in the reference implementation are based on the explicit
analysis of events. Gesture detection is a widely researched subject, and the
separation of detection logic into different trackers allows for multiple types
of gesture detection in the same architecture. An interesting question is
whether multi-touch gestures can be described in a formal, generic way so that
explicit detection code can be avoided.
\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning
to recognize gestures. To use machine learning, a set of input events forming a
particular gesture must be represented as a feature vector. A learning set
containing a set of feature vectors that represent some gesture ``teaches'' the
machine what the features of the gesture look like.

An advantage of using explicit gesture detection code is that it provides a
flexible way to specify the characteristics of a gesture, whereas the
performance of feature vector-based machine learning depends on the quality of
the learning set.
A better method to describe a gesture might be to specify its features as a
``signature''. The parameters of such a signature must be based on low-level
events. When a set of input events matches the signature of some gesture, the
gesture can be triggered. A gesture signature should be a complete description
of all requirements the set of events must meet to form the gesture.
A way to describe signatures on a multi-touch surface is by using a state
machine of its touch objects. The states of a simple touch point could be
$\{down, move, hold, up\}$ to indicate respectively that a point is put down,
is being moved, is held on a position for some time, and is released. In this
case, a ``drag'' gesture can be described by the sequence $down - move - up$
and a ``select'' gesture by the sequence $down - hold$. If the set of states is
not sufficient to describe a desired gesture, a developer can add additional
states. For example, to be able to make a distinction between an element being
``dragged'' or ``thrown'' in some direction on the screen, two additional
states can be added: $\{start, stop\}$, to indicate that a point starts and
stops moving. The resulting state transitions are the sequences $down - start -
move - stop - up$ and $down - start - move - up$ (the latter does not include a
$stop$, to indicate that the element must keep moving after the gesture has
been performed). These two sequences describe a ``drag'' gesture and a
``flick'' gesture, respectively.

An additional way to describe even more complex gestures is to use other
gestures in a signature. An example is to combine $select - drag$ to specify
that an element must be selected before it can be dragged.
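A minimal Python sketch of such signature matching is given below. The state
names follow the example above; the matching code itself is only an
illustration of the idea, not a proposed implementation.

\begin{verbatim}
# Gesture signatures as sequences of touch point states.
SIGNATURES = {
    'select': ['down', 'hold'],
    'drag':   ['down', 'start', 'move', 'stop', 'up'],
    'flick':  ['down', 'start', 'move', 'up'],
}

def match_signature(observed_states):
    """Return the name of the gesture whose signature equals the
    observed state sequence of a touch point, or None."""
    for name, signature in SIGNATURES.items():
        if observed_states == signature:
            return name
    return None

# A point that is put down, starts moving, moves and is released
# without stopping first matches the 'flick' signature.
assert match_signature(['down', 'start', 'move', 'up']) == 'flick'
\end{verbatim}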
The application of a state machine to describe multi-touch gestures is a
subject well worth exploring in the future.
\section{Daemon implementation}

Section \ref{sec:daemon} proposes the use of a network protocol to communicate
between an architecture implementation and (multiple) gesture-based
applications, as illustrated in figure \ref{fig:daemon}. The reference
implementation does not support network communication. If the architecture
design is to become successful in the future, the implementation of network
communication is a must. ZeroMQ (or $\emptyset$MQ) \cite{ZeroMQ} is a
high-performance messaging library with support for a wide range of programming
languages. A future implementation could use this library as the basis for its
communication layer.
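As a rough sketch, a daemon could publish detected gestures on a ZeroMQ PUB
socket to which applications subscribe. The message format and the address used
below are arbitrary choices for illustration.

\begin{verbatim}
import zmq

context = zmq.Context()

# Daemon side: publish every detected gesture as a JSON message.
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:5555')  # illustrative address

def publish_gesture(gesture_type, area_id, params):
    publisher.send_json({
        'gesture': gesture_type,
        'area': area_id,
        'params': params,
    })

# Application side: subscribe to all gestures from the daemon.
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:5555')
subscriber.setsockopt_string(zmq.SUBSCRIBE, '')

message = subscriber.recv_json()  # e.g. {'gesture': 'tap', ...}
\end{verbatim}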
Ideally, a user installs a daemon process containing the architecture so that
it is available to any gesture-based application on the device. Applications
that use the architecture can then specify it as a software dependency, or
include it in their software distribution.

If a final implementation of the architecture is ever released, it would be a
good idea to do so within a community of application developers. Such a
community can contribute to a central database of gesture trackers, making the
interaction from their applications available for use in other applications.
\bibliographystyle{plain}
\bibliography{report}{}
\end{document}