
\documentclass[twoside,openright]{uva-bachelor-thesis}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{hyperref,graphicx,tikz,subfigure,float,lipsum}

% Link colors
\hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}

% Title Page
\title{A generic architecture for gesture-based interaction}
\author{Taddeüs Kroes}
\supervisors{Dr. Robert G. Belleman (UvA)}
\signedby{Dr. Robert G. Belleman (UvA)}

\begin{document}

% Title page
\maketitle

\begin{abstract}
% TODO
\end{abstract}

% Set paragraph indentation
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex

% Table of contents on a separate page
\tableofcontents
\chapter{Introduction}
\label{chapter:introduction}

Surface-touch devices have evolved from pen-based tablets to single-touch
trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
devices enable a user to interact with software using hand gestures, making the
interaction more expressive and intuitive. These gestures are more complex than
the primitive ``click'' or ``tap'' events used by single-touch devices.
Some examples of more complex gestures are ``pinch''\footnote{A ``pinch''
gesture is formed by performing a pinching movement with multiple fingers on a
multi-touch surface. Pinch gestures are often used to zoom in or out on an
object.} and ``flick''\footnote{A ``flick'' gesture is the act of grabbing an
object and throwing it in a direction on a touch surface, giving it momentum to
move for some time after the hand releases the surface.} gestures.

The complexity of gestures is not limited to navigation on smartphones. Some
multi-touch devices are already capable of recognizing objects touching the
screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
will possibly be extended or even replaced with in-air interaction (Microsoft's
Kinect \cite{kinect} and the Leap \cite{leap}).

The interaction devices mentioned above generate primitive events. In the case
of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
events. Application programmers who want to incorporate complex, intuitive
gestures in their application face the challenge of interpreting these
primitive events as gestures. With the increasing complexity of gestures, the
complexity of the logic required to detect these gestures increases as well.
This challenge limits, or even deters, application developers from using
complex gestures in an application.

The main question in this research project is whether a generic architecture
for the detection of complex interaction gestures can be designed, with the
capability of managing the complexity of gesture detection logic. The ultimate
goal would be to create an implementation of this architecture that can be
extended to support a wide range of complex gestures. With the existence of
such an implementation, application developers do not need to reinvent gesture
detection for every new gesture-based application.
Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
already include detection of commonly used gestures such as the \emph{pinch}
gesture. However, this detection logic is dependent on the application
framework. Consequently, an application developer who wants to use multi-touch
interaction in an application is forced to use an application framework that
includes support for multi-touch gestures. Moreover, the set of supported
gestures is limited by the application framework of choice. To incorporate a
custom gesture in an application, the application developer needs to extend the
framework. This requires extensive knowledge of the framework's architecture.
Also, if the same gesture is needed in another application that is based on
another framework, the detection logic has to be translated for use in that
framework. Nevertheless, application frameworks are a necessity when it comes
to fast, cross-platform development. A generic architecture design should aim
to be compatible with existing frameworks, and provide a way to detect and
extend gestures independently of the framework.

Application frameworks are written in a specific programming language. To
support multiple frameworks and programming languages, the architecture should
be accessible to applications using a language-independent method of
communication. This intention leads towards the concept of a dedicated gesture
detection application that serves gestures to multiple applications at the same
time.

The scope of this thesis is limited to the detection of gestures on multi-touch
surface devices. It presents a design for a generic gesture detection
architecture for use in multi-touch based applications. A reference
implementation of this design is used in some test case applications, whose
goal is to test the effectiveness of the design and detect its shortcomings.
\section{Structure of this document}

% TODO: only once the thesis is finished

\chapter{Related work}
\section{Gesture and Activity Recognition Toolkit}

The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
toolkit for the development of gesture-based applications. Its authors
state that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine
learning library from the toolkit. The toolkit contains a callback mechanism
that the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
level of abstraction does allow for it to be implemented in the form of a
``touch'' sensor.

The reason to use machine learning is the statement that gesture detection
``is likely to become increasingly complex and unmanageable'' when using a
set of predefined rules to detect whether some sensor input can be seen as
a specific gesture. This statement is not necessarily true. If the
programmer is given a way to separate the detection of different types of
gestures and flexibility in rule definitions, over-complexity can be
avoided.
\section{Gesture recognition implementation for Windows 7}

The online article \cite{win7touch} presents a Windows 7 application,
written in Microsoft's .NET. The application shows detected gestures on a
canvas. Gesture trackers keep track of stylus locations to detect specific
gestures. The event types required to track a touch stylus are ``stylus
down'', ``stylus move'' and ``stylus up'' events. A
\texttt{GestureTrackerManager} object dispatches these events to gesture
trackers. The application supports a limited number of pre-defined
gestures.

An important observation in this application is that different gestures are
detected by different gesture trackers, thus separating gesture detection
code into maintainable parts.
\section{Analysis of related work}

The simple Processing implementation of multi-touch events (see appendix
\ref{app:experiment}) provides most of the functionality that can be found
in existing multi-touch applications. In fact, many applications for mobile
phones and tablets only use tap and scroll events. For this category of
applications, using machine learning seems excessive. Though the
representation of a gesture using a feature vector in a machine learning
algorithm is a generic and formal way to define a gesture, a
programmer-friendly architecture should also support simple, ``hard-coded''
detection code. A way to separate different pieces of gesture detection
code, thus keeping a code library manageable and extendable, is to use
different gesture trackers.
\chapter{Design}
\label{chapter:design}

% Diagrams are defined in a separate file
\input{data/diagrams}

\section{Introduction}

% TODO: rewrite intro?
This chapter describes the realization of a design for the generic
multi-touch gesture detection architecture. The chapter represents the
architecture as a diagram of relations between different components.
Sections \ref{sec:driver-support} to \ref{sec:multiple-applications} define
requirements for the architecture, and extend the diagram with components
that meet these requirements. Section \ref{sec:example} describes an
example usage of the architecture in an application.

The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
gestures that are used by an application, as illustrated in figure
\ref{fig:basicdiagram}. In the course of this chapter, the diagram is
extended with the different components of the architecture.

\basicdiagram{A diagram showing the position of the architecture
relative to the device driver and a multi-touch application. The input
of the architecture is given by a touch device driver. This input is
translated to complex interaction gestures and passed to the
application that is using the architecture.}
\section{Supporting multiple drivers}
\label{sec:driver-support}

The TUIO protocol \cite{TUIO} is an example of a driver that can be used by
multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
may use very different message types. To support more than one driver in
the architecture, there must be some translation from driver-specific
messages to a common format for primitive touch events. After all, the
gesture detection logic in a ``generic'' architecture should not be
implemented based on driver-specific messages. The event types in this
format should be chosen so that multiple drivers can trigger the same
events. If each supported driver were to add its own set of event types to
the common format, the purpose of being ``common'' would be defeated.

A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
the touch surface. This yields a basic set of events: $\{point\_down,
point\_move, point\_up\}$.

The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
by some touch devices to identify objects.}, which also have a rotational
property. This results in a more extended set: $\{point\_down, point\_move,
point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
Due to their generic nature, the use of these events is not limited to the
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.
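To make this common format concrete, a minimal sketch of what such an event
could look like in Python (the language of the reference implementation) is
given below. The class and attribute names are illustrative only; they are not
prescribed by the design.

\begin{verbatim}
# Minimal sketch of a common, driver-independent event format
# (illustrative names only; not a prescribed API).
class Event(object):
    def __init__(self, event_type, x, y, object_id, angle=None):
        self.type = event_type      # e.g. 'point_down', 'point_move', 'point_up'
        self.x = x                  # position on the touch surface
        self.y = y
        self.object_id = object_id  # identifies the touch object (e.g. a session id)
        self.angle = angle          # only set for rotatable objects (fiducials)
\end{verbatim}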
The component that translates driver-specific messages to common events
will be called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
components in the architecture for translation to gestures. This
communication flow is illustrated in figure \ref{fig:driverdiagram}.

\driverdiagram

Support for a touch driver can be added by adding an event driver
implementation. The choice of event driver implementation that is used in an
application depends on the driver support of the touch device being used.

Because driver implementations have a common output format in the form of
events, multiple event drivers can run at the same time (see figure
\ref{fig:multipledrivers}).

\multipledriversdiagram
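As an illustration of what an event driver implementation could look like, the
sketch below translates TUIO ALIVE- and SET-messages (see appendix
\ref{app:tuio}) into the common events sketched earlier. The class name and the
exact translation rules are assumptions made for the purpose of illustration.

\begin{verbatim}
# Sketch of an event driver for the TUIO protocol (illustrative only).
class TUIOEventDriver(object):
    def __init__(self, delegate):
        self.delegate = delegate  # callable that receives common Event objects
        self.points = {}          # session id -> last known (x, y) position

    def handle_set(self, session_id, x, y):
        # A SET message for an unknown session id marks a new touch point.
        event_type = 'point_move' if session_id in self.points else 'point_down'
        self.points[session_id] = (x, y)
        self.delegate(Event(event_type, x, y, session_id))

    def handle_alive(self, session_ids):
        # Session ids that disappear from the ALIVE list have been lifted.
        for sid in set(self.points) - set(session_ids):
            x, y = self.points.pop(sid)
            self.delegate(Event('point_up', x, y, sid))
\end{verbatim}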
\section{Restricting events to a screen area}
\label{sec:restricting-gestures}

% TODO: in introduction: gestures are composed of multiple primitives
Touch input devices are unaware of the graphical input widgets rendered on
screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on screen, an application programmer must
restrict the occurrence of a gesture to the area of the screen covered by
that widget. An important question is whether the architecture should offer
a solution to this problem, or leave it to the application developer to
assign gestures to a widget.

The latter case generates a problem when a gesture must be able to occur at
different screen positions at the same time. Consider the example in figure
\ref{fig:ex1}, where two squares must be able to be rotated independently
at the same time. If the developer is left the task of assigning a gesture
to one of the squares, the event analysis component in figure
\ref{fig:driverdiagram} receives all events that occur on the screen.
Assuming that the rotation detection logic detects a single rotation
gesture based on all of its input events, without detecting clusters of
input events, only one rotation gesture can be triggered at a time.
When a user attempts to ``grab'' one rectangle with each hand, the events
triggered by all fingers are combined to form a single rotation gesture
instead of two separate gestures.

\examplefigureone

To overcome this problem, groups of events must be separated by the event
analysis component before any detection logic is executed. An obvious
solution for the given example is to incorporate this separation in the
rotation detection logic itself, using a distance threshold that decides if
an event should be added to an existing rotation gesture. However, leaving
the task of separating groups of events to detection logic leads to
duplication of code. For instance, if the rotation gesture is replaced by a
\emph{pinch} gesture that enlarges a rectangle, the detection logic that
detects the pinch gesture would have to contain the same code that separates
groups of events for different gestures. Also, a pinch gesture can be
performed using the fingers of multiple hands as well, in which case the use
of a simple distance threshold is insufficient. These examples show that
gesture detection logic is hard to implement without knowledge about (the
position of) the widget\footnote{``Widget'' is a name commonly used to
identify an element of a graphical user interface (GUI).} that is receiving
the gesture.

Therefore, a better solution for the assignment of events to gesture
detection is to make the gesture detection component aware of the locations
of application widgets on the screen. To accomplish this, the architecture
must contain a representation of the screen area covered by a widget. This
leads to the concept of an \emph{area}, which represents an area on the
touch surface in which events should be grouped before being delegated to a
form of gesture detection. Examples of simple area implementations are
rectangles and circles. However, areas could also be made to represent
more complex shapes.
An area groups events and assigns them to some piece of gesture detection
logic. This possibly triggers a gesture, which must be handled by the
client application. A common way to handle framework events in an
application is a ``callback'' mechanism: the application developer binds a
function to an event, and that function is called by the framework when the
event occurs. Because developers are familiar with this concept, the
architecture uses a callback mechanism to handle gestures in an
application. Since an area controls the grouping of events and thus the
occurrence of gestures in an area, gesture handlers for a specific gesture
type are bound to an area. Figure \ref{fig:areadiagram} shows the position
of areas in the architecture.

\areadiagram{Extension of the diagram from figure \ref{fig:driverdiagram},
showing the position of areas in the architecture. An area delegates events
to a gesture detection component that triggers gestures. The area then calls
the handler that is bound to the gesture type by the application.}

An area can be seen as an independent subset of a touch surface. Therefore,
the parameters (coordinates) of events and gestures within an area should
be relative to the area.

Note that the boundaries of an area are only used to group events, not
gestures. A gesture could occur outside the area that contains its
originating events, as illustrated by the example in figure \ref{fig:ex2}.

\examplefiguretwo
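To make the area concept more concrete, the sketch below shows what a
rectangular area might look like in Python, including the grouping test, the
translation to area-relative coordinates and the binding of gesture handlers.
The names are illustrative and do not represent the actual API of the
reference implementation.

\begin{verbatim}
# Sketch of a rectangular area (illustrative names, not a prescribed API).
class RectangularArea(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.handlers = {}  # gesture type -> list of callback functions

    def contains(self, event):
        return (self.x <= event.x < self.x + self.width and
                self.y <= event.y < self.y + self.height)

    def to_relative(self, event):
        # Event coordinates within an area are relative to that area.
        return event.x - self.x, event.y - self.y

    def bind(self, gesture_type, handler):
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger(self, gesture):
        for handler in self.handlers.get(gesture.type, []):
            handler(gesture)
\end{verbatim}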
A remark must be made about the use of areas to assign events to the
detection of some gesture. The concept of an ``area'' is based on the
assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
This is a reasonable assumption for simple touch objects whose only
parameter is a position, such as a pen or a human finger. However, more
complex touch objects can have additional parameters, such as rotational
orientation or color. An even more generic concept is the \emph{event
filter}, which detects whether an event should be assigned to a particular
piece of gesture detection based on all available parameters. This level of
abstraction allows for constraints like ``Use all blue objects within a
widget for rotation, and green objects for tapping''. As mentioned in the
introduction (chapter \ref{chapter:introduction}), the scope of this thesis
is limited to multi-touch surface based devices, for which the \emph{area}
concept suffices. Section \ref{sec:eventfilter} explores the possibility of
replacing areas with event filters.

\subsection{Area tree}
\label{sec:tree}

The simplest implementation of areas in the architecture is a list of
areas. When the event driver delegates an event, it is delegated to gesture
detection by each area that contains the event coordinates.

If the architecture were to be used in combination with an application
framework like GTK \cite{GTK}, each GTK widget that must receive gestures
should have a mirroring area that synchronizes its position with that of
the widget. Consider a panel with five buttons that all listen to a
``tap'' event. If the panel is moved as a result of movement of the
application window, the position of each button has to be updated.

This process is simplified by the arrangement of areas in a tree structure.
A root area represents the panel, containing five subareas which are
positioned relative to the root area. The relative positions do not need to
be updated when the panel area changes its position. GUI frameworks, like
GTK, use this kind of tree structure to manage widgets. A recommended first
step when developing an application is to create a subclass of the area
that automatically synchronizes with the position of a widget from the GUI
framework.
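A minimal sketch of such a tree arrangement, building on the illustrative
\texttt{RectangularArea} class sketched above, could look as follows. The
position of a child area is stored relative to its parent, so only the root
has to be synchronized with the application window.

\begin{verbatim}
# Sketch of area nesting (extends the illustrative RectangularArea above).
class TreeArea(RectangularArea):
    def __init__(self, x, y, width, height, parent=None):
        # (x, y) is relative to the parent area, if any.
        RectangularArea.__init__(self, x, y, width, height)
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def absolute_position(self):
        # Only the root area stores an absolute position; children add offsets.
        if self.parent is None:
            return self.x, self.y
        px, py = self.parent.absolute_position()
        return px + self.x, py + self.y
\end{verbatim}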
\section{Detecting gestures from events}
\label{sec:gesture-detection}

The events that are grouped by areas must be translated to complex gestures
in some way. Gestures such as a button tap or the dragging of an object
using one finger are easy to detect by comparing the positions of
sequential $point\_down$ and $point\_move$ events.

A way to detect more complex gestures, based on a sequence of input
features, is with the use of machine learning methods, such as Hidden
Markov Models\footnote{A Hidden Markov Model (HMM) is a statistical model
without memory; it can be used to detect gestures based on the current
input state alone.} \cite{conf/gw/RigollKE97}. A sequence of input states
can be mapped to a feature vector that is recognized as a particular
gesture with some probability. This type of gesture recognition is often
used in video processing, where large sets of data have to be processed.
Using an imperative programming style to recognize each possible sign in
sign language detection is near impossible, and certainly not desirable.

Sequences of events that are triggered by multi-touch surfaces are
often of a manageable complexity. An imperative programming style is
sufficient to detect many common gestures. The imperative programming style
is also familiar and understandable for a wide range of application
developers. Therefore, the aim is to use this programming style in the
architecture implementation that is developed during this project.
However, the architecture should not be limited to multi-touch surfaces
alone. For example, the architecture should also be fit to be used in an
application that detects hand gestures from video input.

A problem with the imperative programming style is that the detection of
different gestures requires different pieces of detection code. If this is
not managed well, the detection logic is prone to become chaotic and
over-complex.

To manage complexity and support multiple methods of gesture detection, the
architecture has adopted the tracker-based design as described by
\cite{win7touch}. Different detection components are wrapped in separate
gesture tracking units, or \emph{gesture trackers}. The input of a gesture
tracker is provided by an area in the form of events. When a gesture
tracker detects a gesture, this gesture is triggered in the corresponding
area. The area then calls the callbacks which are bound to the gesture
type by the application. Figure \ref{fig:trackerdiagram} shows the position
of gesture trackers in the architecture.

\trackerdiagram{Extension of the diagram from figure
\ref{fig:areadiagram}, showing the position of gesture trackers in the
architecture.}

The use of gesture trackers as small detection units provides extensibility
of the architecture. A developer can write a custom gesture tracker and
register it in the architecture. The tracker can use any type of detection
logic internally, as long as it translates events to gestures.

An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
gestures.
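The sketch below illustrates the gesture tracker concept with a simple
``tap'' tracker: a \emph{tap} is triggered when a point is released close to
the position where it touched the surface. The class names, the threshold and
the \texttt{Gesture} type are illustrative assumptions, not part of the actual
implementation.

\begin{verbatim}
# Illustrative tap tracker; names and threshold are assumptions.
class Gesture(object):
    def __init__(self, gesture_type, x, y):
        self.type, self.x, self.y = gesture_type, x, y

class TapTracker(object):
    def __init__(self, area, max_distance=10):
        self.area = area
        self.max_distance = max_distance
        self.down_positions = {}  # object id -> position of the point_down event

    def on_event(self, event):
        if event.type == 'point_down':
            self.down_positions[event.object_id] = (event.x, event.y)
        elif event.type == 'point_up':
            start = self.down_positions.pop(event.object_id, None)
            if start is None:
                return
            dx, dy = event.x - start[0], event.y - start[1]
            if dx * dx + dy * dy <= self.max_distance ** 2:
                # Trigger a 'tap' gesture in the corresponding area.
                self.area.trigger(Gesture('tap', event.x, event.y))
\end{verbatim}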
\section{Reserving an event for a gesture}
\label{sec:reserve-event}

A problem occurs when areas overlap, as shown by figure
\ref{fig:eventpropagation}. When the white square is rotated, the gray
square should keep its current orientation. This means that events that are
used for rotation of the white square should not be used for rotation of
the gray square. To achieve this, there must be some communication between
the gesture trackers of the two squares. When an event in the white square
is used for rotation, that event should not be used for rotation in the
gray square. In other words, the event must be \emph{reserved} for the
rotation gesture in the white square. In order to reserve an event, the
event needs to be handled by the rotation tracker of the white square
before the rotation tracker of the gray square receives it. Otherwise, the
gray square has already triggered a rotation gesture and it will be too
late to reserve the event for rotation of the white square.

When an object touches the touch surface, the event that is triggered
should be delegated according to the order in which its corresponding areas
are positioned over each other. The tree structure in which areas are
arranged (see section \ref{sec:tree}) is an ideal tool to determine the
order in which an event is delegated to different areas. Areas in the tree
are positioned on top of their parent. An object touching the screen is
essentially touching the deepest area in the tree that contains the
triggered event. That area should be the first to delegate the event to its
gesture trackers, and then move the event up in the tree to its ancestors.
The movement of an event up in the area tree will be called \emph{event
propagation}. To reserve an event for a particular gesture, a gesture
tracker can stop its propagation. When propagation of an event is stopped,
it will not be passed on to the ancestor areas, thus reserving the event.

The diagram in appendix \ref{app:eventpropagation} illustrates the use of
event propagation, applied to the example of the white and gray squares.
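A sketch of how event propagation could be implemented on top of the
illustrative area tree is given below. It assumes that each area keeps a list
of its gesture trackers, that \texttt{contains} tests absolute surface
coordinates, and that an event carries a \texttt{propagation\_stopped} flag
that a tracker can set to reserve the event.

\begin{verbatim}
# Sketch of event propagation through the area tree (illustrative only).
def deepest_area_containing(area, event):
    # Find the deepest area in the tree that contains the event coordinates.
    for child in area.children:
        if child.contains(event):
            return deepest_area_containing(child, event)
    return area if area.contains(event) else None

def propagate(event, root):
    # Delegate the event to the deepest matching area, then to its ancestors.
    area = deepest_area_containing(root, event)
    while area is not None:
        for tracker in area.trackers:
            tracker.on_event(event)
            if event.propagation_stopped:
                # A tracker has reserved the event for its gesture.
                return
        area = area.parent
\end{verbatim}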
\section{Serving multiple applications}
\label{sec:multiple-applications}

The design of the architecture is essentially complete with the components
specified in this chapter. However, one requirement has not yet been
discussed: the ability to address the architecture using a method of
communication that is independent of the application programming language.

If an application must start the architecture instance in a thread within
the application itself, the architecture is required to be compatible with
the programming language used to write the application. To overcome the
language barrier, an instance of the architecture would have to run in a
separate process.

A common and efficient way of communication between two separate processes
is through the use of a network protocol. In this particular case, the
architecture can run as a daemon\footnote{A ``daemon'' is the Unix term for
a process that runs in the background.} process, listening to driver
messages and triggering gestures in registered applications.

An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would start its own instance of the architecture in a separate
process, which would be less efficient.
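The design does not prescribe a particular protocol, and the reference
implementation does not include one; purely as an illustration, gestures could
for instance be serialized as newline-delimited JSON messages over a TCP
connection between the daemon and a registered application. The port number
and message format below are assumptions.

\begin{verbatim}
# Purely illustrative: one possible way a daemon could send gestures to a
# registered client application; port and message format are assumptions.
import json
import socket

def notify_client(client_socket, gesture_type, x, y):
    message = json.dumps({'gesture': gesture_type, 'x': x, 'y': y})
    client_socket.sendall((message + '\n').encode('utf-8'))

def listen_for_gestures(host='localhost', port=7000):
    # Client side: connect to the daemon and read newline-delimited messages.
    sock = socket.create_connection((host, port))
    for line in sock.makefile('r', encoding='utf-8'):
        yield json.loads(line)
\end{verbatim}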
\section{Example usage}
\label{sec:example}

This section describes an extended example to illustrate the data flow of
the architecture. The example application listens to tap events on a button
within an application window. The window also contains a draggable circle.
The application window can be resized using \emph{pinch} gestures. Figure
\ref{fig:examplediagram} shows the architecture created by the pseudo code
below.

\begin{verbatim}
initialize GUI framework, creating a window and necessary GUI widgets
create a root area that synchronizes position and size with the application window
define 'pinch' gesture handler and bind it to the root area
create an area with the position and radius of the circle
define 'drag' gesture handler and bind it to the circle area
create an area with the position and size of the button
define 'tap' gesture handler and bind it to the button area
create a new event server and assign the created root area to it
start the event server in a new thread
start the GUI main loop in the current thread
\end{verbatim}

\examplediagram
\chapter{Test applications}

\section{Reference implementation in Python}
\label{sec:implementation}

% TODO
% only window.contains on point down, not on move/up
% a few simple windows and trackers
% no network protocol
To test multi-touch interaction properly, a multi-touch device is required. The
University of Amsterdam (UvA) has provided access to a multi-touch table from
PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.

The reference implementation is a Proof of Concept that translates TUIO
messages to some simple touch gestures (see appendix \ref{app:implementation}
for details).
% Because we only have this table, we can only test the event driver concept
% with the TUIO protocol, and not compare it with other drivers.

% TODO
% test programs with PyGame/Cairo
\chapter{Suggestions for future work}

% TODO
% - network protocol (ZeroMQ) for multiple languages and simultaneous processes
% - use a more formal definition of gestures instead of explicit detection
%   logic, e.g. a state machine
% - next step: create a library that contains multiple drivers and complex
%   gestures
% - "event filter" instead of "area"

\section{A generic way for grouping events}
\label{sec:eventfilter}

\bibliographystyle{plain}
\bibliography{report}{}
\appendix

\chapter{The TUIO protocol}
\label{app:tuio}

The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
objects, such as fingers or objects on a multi-touch table. Object information
is sent to the TUIO UDP port (3333 by default).

For efficiency reasons, the TUIO protocol is encoded using the Open Sound
Control \cite[OSC]{OSC} format. An OSC server/client implementation is
available for Python: pyOSC \cite{pyOSC}.

A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
However, the execution of an example script yields an error regarding Python's
built-in \texttt{socket} library. Therefore, the reference implementation uses
the pyOSC package to receive TUIO messages.

The two most important message types of the protocol are ALIVE and SET
messages. An ALIVE message contains the list of session ids that are currently
``active'', which in the case of a multi-touch table means that they are
touching the screen. A SET message provides geometric information for a session
id, such as position, velocity and acceleration.

Each session id represents an object. The only type of object on the
multi-touch table is what the TUIO protocol calls ``2Dcur'', which is an
$(x, y)$ position on the screen.

ALIVE messages can be used to determine when an object touches and releases the
screen. For example, if a session id was present in the previous message but
not in the current one, the object it represents has been lifted from the
screen.

SET messages provide information about movement. In the case of simple
$(x, y)$ positions, only the movement vector of the position itself can be
calculated. For more complex objects such as fiducials, arguments like
rotational position and acceleration are also included.

ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the Windows 7 implementation
\cite{win7touch}).
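As an illustration of this combination, the set difference between two
consecutive ALIVE lists directly yields the ``point down'' and ``point up''
information (a minimal sketch, not taken from the reference implementation):

\begin{verbatim}
# Sketch: derive touch/release information from two consecutive ALIVE lists.
def alive_difference(previous_ids, current_ids):
    appeared = set(current_ids) - set(previous_ids)      # new points ("down")
    disappeared = set(previous_ids) - set(current_ids)   # lifted points ("up")
    return appeared, disappeared
\end{verbatim}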
TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the top
left corner of the screen and $(1.0, 1.0)$ the bottom right corner. To focus
events within a window, a translation to window coordinates is required in the
client application, as stated by the online specification
\cite{TUIO_specification}:
\begin{quote}
    In order to compute the X and Y coordinates for the 2D profiles a TUIO
    tracker implementation needs to divide these values by the actual sensor
    dimension, while a TUIO client implementation consequently can scale these
    values back to the actual screen dimension.
\end{quote}
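In practice this scaling amounts to a single multiplication per coordinate, as
in the following sketch:

\begin{verbatim}
# Sketch: map normalized TUIO coordinates in [0.0, 1.0] to window coordinates.
def tuio_to_window(x, y, window_width, window_height):
    return x * window_width, y * window_height
\end{verbatim}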
\chapter{Experimental program}
\label{app:experiment}

% TODO: This is not really 'related', move it to somewhere else
\section{Processing implementation of simple gestures in Android}

An implementation of a detection architecture for some simple multi-touch
gestures (tap, double tap, rotation, pinch and drag) using
Processing\footnote{Processing is a Java-based development environment with
an export possibility for Android. See also \url{http://processing.org/}.}
can be found in a forum on the Processing website \cite{processingMT}. The
implementation is fairly simple, but it yields some very appealing results.
The detection logic of all gestures is combined in a single class. This
does not allow for extensibility, because the complexity of this class
would increase to an undesirable level (as predicted by the GART article
\cite{GART}). However, the detection logic itself is partially re-used in
the reference implementation of the generic gesture detection architecture.

% TODO: rewrite intro
When designing a software library, its API should be understandable and easy to
use for programmers. To find out the basic requirements for a usable API, an
experimental program has been written based on the Processing code
from \cite{processingMT}. The program receives TUIO events and translates them
to point \emph{down}, \emph{move} and \emph{up} events. These events are then
interpreted to be (double or single) \emph{tap}, \emph{rotation} or
\emph{pinch} gestures. A simple drawing program then draws the current state to
the screen using the PyGame library. The output of the program can be seen in
figure \ref{fig:draw}.
\begin{figure}[H]
    \centering
    \includegraphics[scale=0.4]{data/experimental_draw.png}
    \caption{Output of the experimental drawing program. It draws the touch
    points and their centroid on the screen (the centroid is used as center
    point for rotation and pinch detection). It also draws a green
    rectangle which responds to rotation and pinch events.}
    \label{fig:draw}
\end{figure}
One of the first observations is the fact that TUIO's \texttt{SET} messages use
the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
program multiplies these by its own dimensions, thus showing the entire
screen in its window. Also, the implementation only works using the TUIO
protocol. Other drivers are not supported.

Though using relatively simple math, the rotation and pinch events work
surprisingly well. Both rotation and pinch use the centroid of all touch
points. A \emph{rotation} gesture uses the difference in angle relative to the
centroid of all touch points, and \emph{pinch} uses the difference in distance.
Both values are normalized using division by the number of touch points. A
pinch event contains a scale factor, and therefore uses the ratio of the
current to the previous average distance to the centroid.
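The computation described above can be summarized in a few lines; the sketch
below is a simplified reconstruction, not the experimental program's actual
code.

\begin{verbatim}
# Simplified reconstruction of the centroid-based rotation/pinch computation.
import math

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)

def average_distance(points, center):
    cx, cy = center
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

def pinch_scale(previous_points, current_points):
    # Scale factor: ratio of the current to the previous average distance.
    prev = average_distance(previous_points, centroid(previous_points))
    curr = average_distance(current_points, centroid(current_points))
    return curr / prev

def rotation_delta(previous_points, current_points):
    # Average change in angle of each point relative to the centroid.
    prev_c, curr_c = centroid(previous_points), centroid(current_points)
    total = 0.0
    for (px, py), (cx, cy) in zip(previous_points, current_points):
        total += (math.atan2(cy - curr_c[1], cx - curr_c[0]) -
                  math.atan2(py - prev_c[1], px - prev_c[0]))
    return total / len(current_points)
\end{verbatim}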
There is a flaw in this implementation. Since the centroid is calculated using
all current touch points, there cannot be two or more rotation or pinch
gestures simultaneously. On a large multi-touch table, it is desirable to
support interaction with multiple hands, or multiple persons, at the same time.
These kinds of application-specific requirements should be defined in the
application itself, whereas the experimental implementation bases its detection
algorithms on its test program.

Also, the different detection algorithms are all implemented in the same file,
making the code complex to read or debug, and difficult to extend.
\chapter{Diagram demonstrating event propagation}
\label{app:eventpropagation}

\eventpropagationfigure

\end{document}