
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
Device drivers provide a primitive set of messages. Applications that use
complex gesture-based interaction need to translate these messages to complex
gestures, and map these gestures to elements in the application. This thesis
presents a generic architecture for the detection of complex gestures in an
application. The architecture translates driver-specific messages to a
common set of ``events''. The events are then delegated to a tree of
``areas'', which are used to group events and assign them to an element in
the application. Gesture detection is performed on a group of events
assigned to an area, using detection units called ``gesture trackers''. An
implementation of the architecture should run as a daemon process, serving
gestures to multiple applications at the same time. A reference
implementation and two test case applications have been created to test the
effectiveness of the architecture design.
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on a separate page
\tableofcontents
\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
the primitive ``click'' or ``tap'' events that are used by single-touch devices.
Some examples of more complex gestures are ``pinch''\footnote{A ``pinch''
gesture is formed by performing a pinching movement with multiple fingers on a
multi-touch surface. Pinch gestures are often used to zoom in or out on an
object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an
object and throwing it in a direction on a touch surface, giving it momentum to
move for some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. With the increasing complexity of gestures, the
complexity of the logic required to detect these gestures increases as well.
This challenge limits, or even deters, the application developer from using
complex gestures in an application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. With the existence of
such an implementation, application developers do not need to reinvent gesture
detection for every new gesture-based application.

Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
already include the detection of commonly used gestures like the \emph{pinch}
gesture. However, this detection logic is dependent on the application
framework. Consequently, an application developer who wants to use multi-touch
interaction in an application is forced to use an application framework that
includes support for multi-touch gestures. Moreover, the set of supported
gestures is limited by the application framework of choice. To incorporate a
custom gesture in an application, the application developer needs to extend the
framework. This requires extensive knowledge of the framework's architecture.
Also, if the same gesture is needed in another application that is based on
another framework, the detection logic has to be translated for use in that
framework. Nevertheless, application frameworks are a necessity when it comes
to fast, cross-platform development. A generic architecture design should aim
to be compatible with existing frameworks, and provide a way to detect and
extend gestures independently of the framework.

Application frameworks are written in a specific programming language. To
support multiple frameworks and programming languages, the architecture should
be accessible to applications using a language-independent method of
communication. This intention leads towards the concept of a dedicated gesture
detection application that serves gestures to multiple applications at the same
time.

The scope of this thesis is limited to the detection of gestures on multi-touch
surface devices. It presents a design for a generic gesture detection
architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.
\section{Structure of this document}
% TODO: write this once the thesis is finished
\chapter{Related work}

\section{Gesture and Activity Recognition Toolkit}
The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. The toolkit
states that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.

The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when using a
set of predefined rules to detect whether some sensor input can be seen as
a specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures, and flexibility in rule definitions, over-complexity can be
avoided.

\section{Gesture recognition implementation for Windows 7}
The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts.

\section{Analysis of related work}
The simple Processing implementation of multi-touch gestures
\cite{processingMT} provides most of the functionality that can be found in
existing multi-touch applications. In fact, many applications for mobile
phones and tablets only use tap and scroll events. For this category of
applications, using machine learning seems excessive. Though the
representation of a gesture as a feature vector in a machine learning
algorithm is a generic and formal way to define a gesture, a
programmer-friendly architecture should also support simple, ``hard-coded''
detection code. A way to separate different pieces of gesture detection
code, thus keeping a code library manageable and extendable, is to use
different gesture trackers.
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}
This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The architecture is represented
as a diagram of relations between different components. Sections
\ref{sec:driver-support} to \ref{sec:daemon} define requirements for the
architecture, and extend the diagram with components that meet these
requirements. Section \ref{sec:example} describes an example usage of the
architecture in an application.

The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram

\section{Supporting multiple drivers}
\label{sec:driver-support}
The TUIO protocol \cite{TUIO} is an example of a driver that can be used by
multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
may use very different message types. To support more than one driver in
the architecture, there must be some translation from driver-specific
messages to a common format for primitive touch events. After all, the
gesture detection logic in a ``generic'' architecture should not be
implemented based on driver-specific messages. The event types in this
format should be chosen so that multiple drivers can trigger the same
events. If each supported driver would add its own set of event types to
the common format, the purpose of being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

The component that translates driver-specific messages to common events
is called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
components in the architecture for translation to gestures. This
communication flow is illustrated in figure \ref{fig:driverdiagram}.

\driverdiagram

Support for a touch driver can be added by writing an event driver
implementation. The choice of event driver used in an application depends
on the driver support of the touch device being used.

Because driver implementations have a common output format in the form of
events, multiple event drivers can run at the same time (see figure
\ref{fig:multipledrivers}).

\multipledriversdiagram
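
To make this translation step concrete, the following Python sketch outlines
what a minimal event driver could look like. The class and event names are
hypothetical and only mirror the common event set described above; a real
implementation would parse actual driver messages rather than the hard-coded
tuples used here.
\begin{verbatim}
# Sketch of an event driver (hypothetical names, not an actual API).
from collections import namedtuple

# Common event format shared by all event drivers.
Event = namedtuple('Event', ['type', 'x', 'y'])


class EventDriver(object):
    """Translates driver-specific messages to common events."""

    def __init__(self, delegate):
        # 'delegate' is called with every common event (e.g. the root area).
        self.delegate = delegate

    def handle_message(self, message):
        raise NotImplementedError


class ExampleTouchDriver(EventDriver):
    """Assumed driver that reports ('down'|'move'|'up', x, y) tuples."""

    TYPES = {'down': 'point_down', 'move': 'point_move', 'up': 'point_up'}

    def handle_message(self, message):
        kind, x, y = message
        self.delegate(Event(self.TYPES[kind], x, y))


driver = ExampleTouchDriver(delegate=print)
driver.handle_message(('down', 0.4, 0.5))  # Event(type='point_down', ...)
driver.handle_message(('up', 0.4, 0.5))    # Event(type='point_up', ...)
\end{verbatim}
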
\section{Restricting events to a screen area}
\label{sec:areas}
% TODO: in introduction: gestures are composed of multiple primitives
Touch input devices are unaware of the graphical input widgets rendered on
screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on screen, an application programmer must
restrict the occurrence of a gesture to the area of the screen covered by
that widget. An important question is whether the architecture should offer
a solution to this problem, or leave it to the application developer to
assign gestures to a widget.

The latter case creates a problem when a gesture must be able to occur at
different screen positions at the same time. Consider the example in figure
\ref{fig:ex1}, where two squares must be able to be rotated independently
at the same time. If the developer is left the task of assigning a gesture
to one of the squares, the event analysis component in figure
\ref{fig:driverdiagram} receives all events that occur on the screen.
Assuming that the rotation detection logic detects a single rotation
gesture based on all of its input events, without detecting clusters of
input events, only one rotation gesture can be triggered at the same time.
When a user attempts to ``grab'' one rectangle with each hand, the events
triggered by all fingers are combined to form a single rotation gesture
instead of two separate gestures.

\examplefigureone

To overcome this problem, groups of events must be separated by the event
analysis component before any detection logic is executed. An obvious
solution for the given example is to incorporate this separation in the
rotation detection logic itself, using a distance threshold that decides if
an event should be added to an existing rotation gesture. However, leaving
the task of separating groups of events to detection logic leads to
duplication of code. For instance, if the rotation gesture is replaced by a
\emph{pinch} gesture that enlarges a rectangle, the detection logic that
detects the pinch gesture would have to contain the same code that
separates groups of events for different gestures. Also, a pinch gesture
can be performed using the fingers of multiple hands as well, in which case
the use of a simple distance threshold is insufficient. These examples show
that gesture detection logic is hard to implement without knowledge about
(the position of) the widget\footnote{``Widget'' is a name commonly used to
identify an element of a graphical user interface (GUI).} that is receiving
the gesture.

Therefore, a better solution for the assignment of events to gesture
detection is to make the gesture detection component aware of the locations
of application widgets on the screen. To accomplish this, the architecture
must contain a representation of the screen area covered by a widget. This
leads to the concept of an \emph{area}, which represents an area on the
touch surface in which events should be grouped before being delegated to a
form of gesture detection. Examples of simple area implementations are
rectangles and circles. However, areas could also be made to represent
more complex shapes.

An area groups events and assigns them to some piece of gesture detection
logic. This possibly triggers a gesture, which must be handled by the
client application. A common way to handle framework events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, and that function is called by the framework when the
event occurs. Because developers are familiar with this concept, the
architecture uses a callback mechanism to handle gestures in an
application. Since an area controls the grouping of events and thus the
occurrence of gestures in an area, gesture handlers for a specific gesture
type are bound to an area. Figure \ref{fig:areadiagram} shows the position
of areas in the architecture.

\areadiagram

An area can be seen as an independent subset of a touch surface. Therefore,
the parameters (coordinates) of events and gestures within an area should
be relative to the area.
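
As a minimal illustration, a rectangular area only needs a containment test
and a translation of event coordinates to its own coordinate system. The
Python sketch below uses hypothetical names, not the exact interface of the
reference implementation.
\begin{verbatim}
# Sketch of a rectangular area (hypothetical names).
class RectangularArea(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def contains(self, event):
        # Decides whether an event should be grouped by this area.
        return (self.x <= event.x < self.x + self.width
                and self.y <= event.y < self.y + self.height)

    def to_relative(self, event):
        # Event coordinates within an area are relative to the area.
        return (event.x - self.x, event.y - self.y)
\end{verbatim}
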
Note that the boundaries of an area are only used to group events, not
gestures. A gesture could occur outside the area that contains its
originating events, as illustrated by the example in figure \ref{fig:ex2}.

\examplefiguretwo

A remark must be made about the use of areas to assign events to the
detection of some gesture. The concept of an ``area'' is based on the
assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
This is a reasonable assumption for simple touch objects whose only
parameter is a position, such as a pen or a human finger. However, more
complex touch objects can have additional parameters, such as rotational
orientation or color. An even more generic concept is the \emph{event
filter}, which decides whether an event should be assigned to a particular
piece of gesture detection based on all available parameters. This level of
abstraction allows for constraints like ``use all blue objects within a
widget for rotation, and green objects for tapping''. As mentioned in the
introduction chapter [\ref{chapter:introduction}], the scope of this thesis
is limited to multi-touch surface based devices, for which the \emph{area}
concept suffices. Section \ref{sec:eventfilter} explores the possibility of
replacing areas with event filters.

\subsection{Area tree}
\label{sec:tree}
The simplest implementation of areas in the architecture is a list of
areas. When the event driver delegates an event, it is delegated to gesture
detection by each area that contains the event coordinates.

If the architecture were to be used in combination with an application
framework like GTK \cite{GTK}, each GTK widget that must receive gestures
should have a mirroring area that synchronizes its position with that of
the widget. Consider a panel with five buttons that all listen to a
``tap'' event. If the panel is moved as a result of movement of the
application window, the position of each button has to be updated.

This process is simplified by the arrangement of areas in a tree structure.
A root area represents the panel, containing five subareas which are
positioned relative to the root area. The relative positions do not need to
be updated when the panel area changes its position. GUI frameworks, like
GTK, use this kind of tree structure to manage widgets. A recommended first
step when developing an application is to create a subclass of the area
that automatically synchronizes with the position of a widget from the GUI
framework.
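
The sketch below illustrates such a tree: subareas store their position
relative to their parent, so moving the panel area implicitly moves the
buttons. The names are again hypothetical.
\begin{verbatim}
# Sketch of an area tree with positions relative to the parent.
class Area(object):
    def __init__(self, x, y, width, height, parent=None):
        self.x, self.y = x, y          # position relative to the parent
        self.width, self.height = width, height
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def absolute_position(self):
        if self.parent is None:
            return (self.x, self.y)
        px, py = self.parent.absolute_position()
        return (px + self.x, py + self.y)


panel = Area(100, 100, 500, 60)
buttons = [Area(10 + i * 100, 10, 90, 40, parent=panel) for i in range(5)]

# Moving the panel moves all buttons; relative positions stay unchanged.
panel.x += 50
print(buttons[0].absolute_position())  # (160, 110)
\end{verbatim}
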
\section{Detecting gestures from events}
\label{sec:gesture-detection}
The events that are grouped by areas must be translated to complex gestures
in some way. Gestures such as a button tap or the dragging of an object
using one finger are easy to detect by comparing the positions of
sequential $point\_down$ and $point\_move$ events.

A way to detect more complex gestures based on a sequence of input
features is to use machine learning methods, such as Hidden Markov
Models\footnote{A Hidden Markov Model (HMM) is a statistical model without
a memory; it can be used to detect gestures based on the current input
state alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be
mapped to a feature vector that is recognized as a particular gesture with
some probability. This type of gesture recognition is often used in video
processing, where large sets of data have to be processed. Using an
imperative programming style to recognize each possible sign in sign
language detection is near impossible, and certainly not desirable.

Sequences of events that are triggered by a multi-touch surface are
often of a manageable complexity. An imperative programming style is
sufficient to detect many common gestures. The imperative programming style
is also familiar and understandable for a wide range of application
developers. Therefore, the aim is to use this programming style in the
architecture implementation that is developed during this project.
However, the architecture should not be limited to multi-touch surfaces
alone. For example, the architecture should also be suitable for use in an
application that detects hand gestures from video input.

A problem with the imperative programming style is that the detection of
different gestures requires different pieces of detection code. If this is
not managed well, the detection logic is prone to become chaotic and
over-complex.

To manage complexity and support multiple methods of gesture detection, the
architecture adopts the tracker-based design described by
\cite{win7touch}. Different detection components are wrapped in separate
gesture tracking units, or \emph{gesture trackers}. The input of a gesture
tracker is provided by an area in the form of events. When a gesture
tracker detects a gesture, this gesture is triggered in the corresponding
area. The area then calls the callbacks which are bound to the gesture
type by the application. Figure \ref{fig:trackerdiagram} shows the position
of gesture trackers in the architecture.

\trackerdiagram

The use of gesture trackers as small detection units provides extensibility
of the architecture. A developer can write a custom gesture tracker and
register it in the architecture. The tracker can use any type of detection
logic internally, as long as it translates events to gestures.
An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
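
As an illustration, the sketch below shows a tiny gesture tracker that turns
a $point\_down$/$point\_up$ pair into a ``tap'' gesture when the point has
not moved too far in between. The class and method names are hypothetical;
in the architecture, a tracker receives its events from an area and triggers
gestures in that area.
\begin{verbatim}
# Sketch of a simple "tap" gesture tracker (hypothetical names).
class TapTracker(object):
    """Detects a tap: point_down followed by a nearby point_up."""

    def __init__(self, area, max_distance=10):
        self.area = area
        self.max_distance = max_distance
        self.down_positions = {}  # touch id -> (x, y)

    def on_event(self, touch_id, event_type, x, y):
        if event_type == 'point_down':
            self.down_positions[touch_id] = (x, y)
        elif event_type == 'point_up' and touch_id in self.down_positions:
            x0, y0 = self.down_positions.pop(touch_id)
            if (x - x0) ** 2 + (y - y0) ** 2 <= self.max_distance ** 2:
                # The area calls the 'tap' handlers bound by the application.
                self.area.trigger('tap', x, y)
\end{verbatim}
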
\section{Reserving an event for a gesture}
\label{sec:reserve-event}
A problem occurs when areas overlap, as shown by figure
\ref{fig:eventpropagation}. When the white square is rotated, the gray
square should keep its current orientation. This means that events that are
used for rotation of the white square should not be used for rotation of
the gray square. To achieve this, there must be some communication between
the gesture trackers of the two squares. When an event in the white square
is used for rotation, that event should not be used for rotation in the
gray square. In other words, the event must be \emph{reserved} for the
rotation gesture in the white square. In order to reserve an event, the
event needs to be handled by the rotation tracker of the white square
before the rotation tracker of the gray square receives it. Otherwise, the
gray square has already triggered a rotation gesture and it will be too
late to reserve the event for rotation of the white square.

When an object touches the touch surface, the event that is triggered
should be delegated according to the order in which its corresponding areas
are positioned over each other. The tree structure in which areas are
arranged (see section \ref{sec:tree}) is an ideal tool to determine the
order in which an event is delegated to different areas. Areas in the tree
are positioned on top of their parent. An object touching the screen is
essentially touching the deepest area in the tree that contains the
triggered event. That area should be the first to delegate the event to its
gesture trackers, before the event moves up in the tree to its ancestors.

The movement of an event up the area tree will be called \emph{event
propagation}. To reserve an event for a particular gesture, a gesture
tracker can stop its propagation. When propagation of an event is stopped,
it will not be passed on to the ancestor areas, thus reserving the event.
The diagram in appendix \ref{app:eventpropagation} illustrates the use of
event propagation, applied to the example of the white and gray squares.
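
The sketch below illustrates this delegation order: an event is first handled
by the deepest area containing it, and is then propagated to its ancestors
unless a gesture tracker stops the propagation. The attribute and method
names are hypothetical.
\begin{verbatim}
# Sketch of event delegation and propagation in an area tree.
def delegate_event(area, event):
    """Delegate an event to the deepest containing area first."""
    for child in reversed(area.children):  # topmost child areas first
        if child.contains(event):
            return delegate_event(child, event)
    propagate_event(area, event)


def propagate_event(area, event):
    """Move the event up the tree until its propagation is stopped."""
    while area is not None:
        for tracker in area.trackers:
            tracker.handle_event(event)
            if event.propagation_stopped:
                return  # the event is reserved for a gesture
        area = area.parent
\end{verbatim}
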
\section{Serving multiple applications}
\label{sec:daemon}
The design of the architecture is essentially complete with the components
specified in this chapter. However, one requirement has not yet been
discussed: the ability to address the architecture using a method of
communication that is independent of the application programming language.

If an application must start the architecture instance in a thread within
the application itself, the architecture is required to be compatible with
the programming language used to write the application. To overcome the
language barrier, an instance of the architecture would have to run in a
separate process.

A common and efficient way of communication between two separate processes
is through the use of a network protocol. In this particular case, the
architecture can run as a daemon\footnote{``Daemon'' is the name Unix uses
to indicate that a process runs as a background process.} process,
listening to driver messages and triggering gestures in registered
applications.
\vspace{-0.3em}

\daemondiagram

An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would start its own instance of the architecture in a separate
process, which would be less efficient.

\section{Example usage}
\label{sec:example}
This section describes an extended example to illustrate the data flow of
the architecture. The example application listens to tap events on a button
within an application window. The window also contains a draggable circle.
The application window can be resized using \emph{pinch} gestures. Figure
\ref{fig:examplediagram} shows the architecture created by the pseudo code
below.
\begin{verbatim}
initialize GUI framework, creating a window and necessary GUI widgets
create a root area that synchronizes position and size with the application window
define 'pinch' gesture handler and bind it to the root area
create an area with the position and radius of the circle
define 'drag' gesture handler and bind it to the circle area
create an area with the position and size of the button
define 'tap' gesture handler and bind it to the button area
create a new event server and assign the created root area to it
start the event server in a new thread
start the GUI main loop in the current thread
\end{verbatim}
\examplediagram
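
A Python version of this pseudo code could look roughly as follows. The class
and function names are made up for the purpose of illustration and do not
necessarily match the reference implementation.
\begin{verbatim}
# Hypothetical Python equivalent of the pseudo code above.
window_area = RectangularArea(0, 0, 800, 600)  # synchronized with the window
window_area.bind('pinch', lambda gesture: resize_window(gesture.scale))

circle_area = CircularArea(200, 150, radius=50)
circle_area.bind('drag', lambda gesture: move_circle(gesture.dx, gesture.dy))
window_area.add_child(circle_area)

button_area = RectangularArea(650, 500, 120, 40)
button_area.bind('tap', lambda gesture: on_button_tap())
window_area.add_child(button_area)

server = EventServer(root_area=window_area)
server.start_in_thread()  # the architecture runs next to the GUI main loop
gui_main_loop()
\end{verbatim}
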
\chapter{Test applications}
A reference implementation of the design is written in Python. Two test
applications have been created to test whether the design ``works'' in a
practical application, and to detect its flaws. One application is mainly
used to test the gesture tracker implementations. The other program uses
areas in a tree, demonstrating event delegation and propagation.

To test multi-touch interaction properly, a multi-touch device is required.
The University of Amsterdam (UvA) has provided access to a multi-touch table
from PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.
%The reference implementation and its test applications are a Proof of Concept,
%meant to show that the architecture design is effective.
%that translates TUIO messages to some common multi-touch gestures.

\section{Reference implementation}
\label{sec:implementation}
The reference implementation is written in Python and available at
\cite{gitrepos}. The following component implementations are included:

\textbf{Event drivers}
\begin{itemize}
\item TUIO driver, using only the support for simple touch points with an
$(x, y)$ position.
\end{itemize}

\textbf{Gesture trackers}
\begin{itemize}
\item Basic tracker, supports $point\_down,~point\_move,~point\_up$ gestures.
\item Tap tracker, supports $tap,~single\_tap,~double\_tap$ gestures.
\item Transformation tracker, supports $rotate,~pinch,~drag$ gestures.
\end{itemize}

\textbf{Areas}
\begin{itemize}
\item Circular area
\item Rectangular area
\item Full screen area
\end{itemize}

The implementation does not include a network protocol to support the daemon
setup described in section \ref{sec:daemon}. Therefore, it is only usable in
Python programs; consequently, the two test programs are also written in
Python.

The area implementations contain some geometric functions to determine whether
an event should be delegated to an area. All gesture trackers have been
implemented using an imperative programming style. Technical details about the
implementation of gesture detection are described in appendix
\ref{app:implementation-details}.
\section{Full screen Pygame program}
%The goal of this program was to experiment with the TUIO
%protocol, and to discover requirements for the architecture that was to be
%designed. When the architecture design was completed, the program was rewritten
%using the new architecture components. The original variant is still available
%in the ``experimental'' folder of the Git repository \cite{gitrepos}.
An implementation of the detection of some simple multi-touch gestures (single
tap, double tap, rotation, pinch and drag) using Processing\footnote{Processing
is a Java-based programming environment with an export possibility for Android.
See also \cite{processing}.} can be found in a forum on the Processing website
\cite{processingMT}. The program has been ported to Python and adapted to
receive input from the TUIO protocol. The implementation is fairly simple, but
it yields some appealing results (see figure \ref{fig:draw}). In the original
program, the detection logic of all gestures is combined in a single class
file. As predicted by the GART article \cite{GART}, this leads to over-complex
code that is difficult to read and debug.

The application has been rewritten using the reference implementation of the
architecture. The detection code is separated into two different gesture
trackers, which are the ``tap'' and ``transformation'' trackers mentioned in
section \ref{sec:implementation}.

The application receives TUIO events and translates them to \emph{point\_down},
\emph{point\_move} and \emph{point\_up} events. These events are then
interpreted as \emph{single tap}, \emph{double tap}, \emph{rotation} or
\emph{pinch} gestures. The positions of all touch objects are drawn using the
Pygame library. Since the Pygame library does not provide support to find the
location of the display window, the root area captures events on the entire
screen surface. The application can be run either full screen or in windowed
mode. If windowed, screen-wide gesture coordinates are mapped to the size of
the Pygame window. In other words, the Pygame window always represents the
entire touch surface. The output of the program can be seen in figure
\ref{fig:draw}.

\begin{figure}[h!]
\center
\includegraphics[scale=0.4]{data/pygame_draw.png}
\caption{Output of the experimental drawing program. It draws all touch
points and their centroid on the screen (the centroid is used for rotation
and pinch detection). It also draws a green rectangle which responds to
rotation and pinch events.}
\label{fig:draw}
\end{figure}
\section{GTK/Cairo program}
The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create
its user interface. Since GTK+ defines a main event loop that must be started
in order to use the interface, the architecture implementation runs in a
separate thread. The application creates a main window, whose size and
position are synchronized with the root area of the architecture.
% TODO
\emph{TODO: extend this section and add screenshots (this program is not
finished yet).}

\section{Discussion}
% TODO
\emph{TODO: point out the shortcomings that emerge from the tests.}
% Different devices/drivers emit different kinds of primitive events. A
% translation of these device-specific events to a common format of events
% is needed to perform gesture detection in a generic way.
% By passing the input of multiple drivers through the same event driver,
% multiple devices can be supported at the same time.
% The event driver delivers low-level events. Not every event belongs to every
% gesture, so there must be a filtering of which events belong to which
% gesture. Areas provide this possibility for devices on which the filtering
% is location-based.
% Splitting up gesture detection into gesture trackers is a way to stay
% flexible in the supported types of detection logic, and to keep complexity
% manageable.
\chapter{Suggestions for future work}

\section{A generic method for grouping events}
\label{sec:eventfilter}
As mentioned in section \ref{sec:areas}, the concept of an \emph{area} is based
on the assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
Since this thesis focuses on multi-touch surface based devices, and every
object on a multi-touch surface has a position, this assumption is valid.
However, the design of the architecture is meant to be more generic; it should
provide a structured way of managing gesture detection.

An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect},
provides 3D positions. Some multi-touch tables work with a camera that can also
determine the shape and rotational orientation of objects touching the surface.
For these devices, events delegated by the event driver have more parameters
than a 2D position alone. The term ``area'' is not suitable to describe a group
of events that consist of these parameters.

A more generic term for a component that groups similar events is the
\emph{event filter}. The concept of an event filter is based on the same
principle as areas, which is the assumption that gestures are formed from a
subset of all events. However, an event filter takes all parameters of an event
into account. A possible application on a camera-based multi-touch table would
be to group all triangular objects into one filter, and all rectangular
objects into another. Or, to separate small fingertips from large ones to be
able to recognize whether a child or an adult touches the table.
\section{Using a state machine for gesture detection}
All gesture trackers in the reference implementation are based on the explicit
analysis of events. Gesture detection is a widely researched subject, and the
separation of detection logic into different trackers allows for multiple types
of gesture detection in the same architecture. An interesting question is
whether multi-touch gestures can be described in a formal way so that explicit
detection code can be avoided.

\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning
to recognize gestures. To use machine learning, a set of input events forming
a particular gesture must be represented as a feature vector. A learning set
containing feature vectors that represent some gesture ``teaches'' the
machine what the features of the gesture look like.

An advantage of using explicit gesture detection code is the fact that it
provides a flexible way to specify the characteristics of a gesture, whereas
the performance of feature vector-based machine learning is dependent on the
quality of the learning set.

A better method to describe a gesture might be to specify its features as a
``signature''. The parameters of such a signature must be based on input
events. When a set of input events matches the signature of some gesture, the
gesture is triggered. A gesture signature should be a complete description
of all requirements the set of events must meet to form the gesture.

A way to describe signatures on a multi-touch surface is by the use of a
state machine of its touch objects. The states of a simple touch point could be
$\{down, move, hold, up\}$ to indicate respectively that a point is put down,
is being moved, is held at a position for some time, and is released. In this
case, a ``drag'' gesture can be described by the sequence $down - move - up$
and a ``select'' gesture by the sequence $down - hold$. If the set of states is
not sufficient to describe a desired gesture, a developer can add additional
states. For example, to be able to make a distinction between an element being
``dragged'' or ``thrown'' in some direction on the screen, two additional
states can be added: $\{start, stop\}$, to indicate that a point starts or
stops moving. The resulting state transitions are the sequences $down - start -
move - stop - up$ and $down - start - move - up$ (the latter does not include a
$stop$, to indicate that the element must keep moving after the gesture has
been performed).
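
As a sketch of this idea, a gesture signature could simply be the sequence of
states that a touch point passes through, matched against the observed state
transitions. The code below is hypothetical and assumes the extended state
set described above.
\begin{verbatim}
# Sketch: describing gestures as state sequences (hypothetical).
SIGNATURES = {
    'drag':   ['down', 'start', 'move', 'stop', 'up'],
    'throw':  ['down', 'start', 'move', 'up'],
    'select': ['down', 'hold'],
}


def match_gestures(observed_states):
    """Return the names of gestures whose signature matches exactly."""
    return [name for name, signature in SIGNATURES.items()
            if observed_states == signature]


print(match_gestures(['down', 'start', 'move', 'up']))  # ['throw']
\end{verbatim}
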
An additional way to describe even more complex gestures is to use other
gestures in a signature. An example is to combine $select - drag$ to specify
that an element must be selected before it can be dragged.

The application of a state machine to describe multi-touch gestures is a
subject well worth exploring in the future.
\section{Daemon implementation}
Section \ref{sec:daemon} proposes the usage of a network protocol to
communicate between an architecture implementation and (multiple) gesture-based
applications, as illustrated in figure \ref{fig:daemon}. The reference
implementation does not support network communication. If the architecture
design is to become successful in the future, the implementation of network
communication is a must. ZeroMQ (or $\emptyset$MQ) \cite{ZeroMQ} is a
high-performance messaging library with support for a wide range of programming
languages. A future implementation could use this library as the basis for its
communication layer.
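
A minimal sketch of such a communication layer, using ZeroMQ's
publish/subscribe pattern through the pyzmq bindings, is shown below. The
``gesture ...'' message format is invented here purely for illustration.
\begin{verbatim}
# Minimal pyzmq sketch; daemon and application run in separate processes.
import zmq

# --- daemon process: publish detected gestures ---
daemon_context = zmq.Context()
publisher = daemon_context.socket(zmq.PUB)
publisher.bind("tcp://*:5556")
publisher.send_string("gesture tap 120 340")  # sent from the detection loop

# --- application process: subscribe to gesture messages ---
app_context = zmq.Context()
subscriber = app_context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gesture")
print(subscriber.recv_string())  # e.g. "gesture tap 120 340"
\end{verbatim}
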
If an implementation of the architecture is to be released, a good idea would
be to do so within a community of application developers. A community can
contribute to a central database of gesture trackers, making the interaction
from their applications available for use in other applications.

Ideally, a user can install a daemon process containing the architecture so
that it is usable for any gesture-based application on the device. Applications
that use the architecture can specify it as a software dependency, or
include it in a software distribution.
\bibliographystyle{plain}
\bibliography{report}{}

\appendix

\chapter{The TUIO protocol}
\label{app:tuio}
The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a session
id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2DCur'': an $(x, y)$
position on the screen.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was in the previous message but not in the
current one, the object it represents has been lifted from the screen.
SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the Windows 7 implementation
\cite{win7touch}).
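
The sketch below shows how successive ALIVE messages can be turned into
``point down'' and ``point up'' events, and SET messages into ``point move''
events. The handler and callback names are hypothetical; the reference
implementation receives the underlying OSC messages through pyOSC.
\begin{verbatim}
# Sketch: deriving point events from TUIO ALIVE and SET messages.
alive = set()    # session ids currently touching the surface
positions = {}   # session id -> last known (x, y)


def handle_alive(session_ids, emit):
    """Compare the new ALIVE list with the previous one."""
    global alive
    current = set(session_ids)
    for sid in current - alive:
        emit('point_down', sid)        # position arrives with the first SET
    for sid in alive - current:
        emit('point_up', sid)
        positions.pop(sid, None)
    alive = current


def handle_set(sid, x, y, emit):
    """Emit a move event when the position of a session id changes."""
    if positions.get(sid) != (x, y):
        positions[sid] = (x, y)
        emit('point_move', sid, x, y)
\end{verbatim}
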
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
In order to compute the X and Y coordinates for the 2D profiles a TUIO
tracker implementation needs to divide these values by the actual sensor
dimension, while a TUIO client implementation consequently can scale these
values back to the actual screen dimension.
\end{quote}
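
For example, a client drawing in an $800 \times 600$ pixel window could scale
the normalized coordinates as follows (the window size is assumed here for
illustration):
\begin{verbatim}
# Scale normalized TUIO coordinates (0.0-1.0) to window pixels.
def tuio_to_window(x, y, width=800, height=600):
    return (int(x * width), int(y * height))

print(tuio_to_window(0.5, 0.25))  # (400, 150)
\end{verbatim}
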
\chapter{Diagram demonstrating event propagation}
\label{app:eventpropagation}
\eventpropagationfigure

\chapter{Gesture detection in the reference implementation}
\label{app:implementation-details}
Both rotation and pinch use the centroid of all touch points. A \emph{rotation}
gesture uses the difference in angle relative to the centroid of all touch
points, and \emph{pinch} uses the difference in distance. Both values are
normalized using division by the number of touch points. A pinch event contains
a scale factor, and therefore uses a division of the current by the previous
average distance to the centroid.
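
A sketch of this computation is given below; it is a simplified illustration,
not the exact code of the reference implementation.
\begin{verbatim}
# Sketch: rotation angle and pinch scale relative to the centroid.
from math import atan2, hypot


def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(points), sum(ys) / len(points))


def rotation_and_pinch(previous, current):
    """Angle delta and scale factor between two states of the same points."""
    (pcx, pcy), (ccx, ccy) = centroid(previous), centroid(current)
    # Average difference in angle relative to the centroid (rotation),
    # normalized by the number of touch points.
    angle = sum(atan2(cy - ccy, cx - ccx) - atan2(py - pcy, px - pcx)
                for (px, py), (cx, cy) in zip(previous, current))
    angle /= len(current)
    # Pinch scale factor: current average distance to the centroid divided
    # by the previous average distance.
    prev_dist = sum(hypot(px - pcx, py - pcy) for px, py in previous)
    curr_dist = sum(hypot(cx - ccx, cy - ccy) for cx, cy in current)
    return angle, curr_dist / prev_dist
\end{verbatim}
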
% TODO
\emph{TODO: rotation and pinch will be described somewhat differently and in
more detail.}

\end{document}