- \documentclass[twoside,openright]{uva-bachelor-thesis}
- \usepackage[english]{babel}
- \usepackage[utf8]{inputenc}
- \usepackage{hyperref,graphicx,float,tikz}
- % Link colors
- \hypersetup{colorlinks=true,linkcolor=black,urlcolor=blue,citecolor=DarkGreen}
- % Title Page
- \title{A generic architecture for gesture-based interaction}
- \author{Taddeüs Kroes}
- \supervisors{Dr. Robert G. Belleman (UvA)}
- \signedby{Dr. Robert G. Belleman (UvA)}
- \begin{document}
- % Title page
- \maketitle
- \begin{abstract}
- % TODO
- \end{abstract}
- % Set paragraph indentation
- \parindent 0pt
- \parskip 1.5ex plus 0.5ex minus 0.2ex
- % Table of content on separate page
- \tableofcontents
- \chapter{Introduction}
- Surface-touch devices have evolved from pen-based tablets to single-touch
- trackpads, to multi-touch devices like smartphones and tablets. Multi-touch
- devices enable a user to interact with software using hand gestures, making the
- interaction more expressive and intuitive. These gestures are more complex than
- the primitive ``click'' or ``tap'' events used by single-touch devices.
- Some examples of more complex gestures are so-called ``pinch''\footnote{A
- ``pinch'' gesture is formed by performing a pinching movement with multiple
- fingers on a multi-touch surface. Pinch gestures are often used to zoom in or
- out on an object.} and ``flick''\footnote{A ``flick'' gesture is the act of
- grabbing an object and throwing it in a direction on a touch surface, giving
- it momentum to move for some time after the hand releases the surface.}
- gestures.
- Gesture-based interaction is not limited to navigation on smartphones. Some
- multi-touch devices are already capable of recognizing objects touching the
- screen \cite[Microsoft Surface]{mssurface}. In the near future, touch screens
- will possibly be extended or even replaced with in-air interaction (Microsoft's
- Kinect \cite{kinect} and the Leap \cite{leap}).
- The interaction devices mentioned above generate primitive events. In the case
- of surface-touch devices, these are \emph{down}, \emph{move} and \emph{up}
- events. Application programmers who want to incorporate complex, intuitive
- gestures in their application face the challenge of interpreting these
- primitive events as gestures. With the increasing complexity of gestures, the
- complexity of the logic required to detect these gestures increases as well.
- This challenge limits, or even deters, the application developer from using
- complex gestures in an application.
- The main question in this research project is whether a generic architecture
- for the detection of complex interaction gestures can be designed, with the
- capability of managing the complexity of gesture detection logic.
- Application frameworks for surface-touch devices, such as Nokia's Qt \cite{qt},
- include the detection of commonly used gestures like \emph{pinch} gestures.
- However, this detection logic is dependent on the application framework.
- Consequently, an application developer who wants to use multi-touch interaction
- in an application is forced to choose an application framework that includes
- support for multi-touch gestures. Therefore, a requirement of the generic
- architecture is that it must not be bound to a specific application framework.
- Moreover, the set of supported gestures is limited by the application framework
- of choice. To incorporate a custom gesture in an application, the application
- developer needs to extend the framework. This requires extensive knowledge of
- the framework's architecture. Also, if the same gesture is used in another
- application that is based on another framework, the detection logic has to be
- translated for use in that framework. Nevertheless, application frameworks are
- a necessity when it comes to fast, cross-platform development. Therefore, the
- architecture design should aim to be compatible with existing frameworks, but
- provide a way to detect and extend gestures independent of the framework.
- An application framework is written in a specific programming language. A
- generic architecture should not be limited to a single programming language. The
- ultimate goal of this thesis is to provide support for complex gesture
- interaction in any application. Thus, applications should be able to address
- the architecture using a language-independent method of communication. This
- intention leads towards the concept of a dedicated gesture detection
- application that serves gestures to multiple programs at the same time.
- The scope of this thesis is limited to the detection of gestures on multi-touch
- surface devices. It presents a design for a generic gesture detection
- architecture for use in multi-touch based applications. A reference
- implementation of this design is used in some test case applications, whose
- goal is to test the effectiveness of the design and detect its shortcomings.
- % FIXME: Should this still go in the introduction?
- % How can the input of the architecture be normalized? This is needed, because
- % multi-touch drivers use their own specific message format.
- \section{Structure of this document}
- % TODO: write once the thesis is finished
- \chapter{Related work}
- \section{Gesture and Activity Recognition Toolkit}
- The Gesture and Activity Recognition Toolkit (GART) \cite{GART} is a
- toolkit for the development of gesture-based applications. Its authors
- state that the best way to classify gestures is to use machine learning.
- The programmer trains a program to recognize gestures using the machine
- learning library from the toolkit. The toolkit contains a callback
- mechanism that the programmer uses to execute custom code when a gesture
- is recognized.
- Though multi-touch input is not directly supported by the toolkit, the
- level of abstraction does allow for it to be implemented in the form of a
- ``touch'' sensor.
- The stated reason for using machine learning is that gesture detection
- ``is likely to become increasingly complex and unmanageable'' when a set
- of predefined rules is used to detect whether some sensor input
- constitutes a specific gesture. This statement is not necessarily true. If the
- programmer is given a way to separate the detection of different types of
- gestures and flexibility in rule definitions, over-complexity can be
- avoided.
- \section{Gesture recognition implementation for Windows 7}
- The online article \cite{win7touch} presents a Windows 7 application,
- written in Microsoft's .NET. The application shows detected gestures on a
- canvas. Gesture trackers keep track of stylus locations to detect specific
- gestures. The event types required to track a touch stylus are ``stylus
- down'', ``stylus move'' and ``stylus up'' events. A
- \texttt{GestureTrackerManager} object dispatches these events to gesture
- trackers. The application supports a limited number of pre-defined
- gestures.
- An important observation in this application is that different gestures are
- detected by different gesture trackers, thus separating gesture detection
- code into maintainable parts. The architecture presented in this thesis
- adopts this design feature, also using different gesture trackers to track
- different gesture types.
- % TODO: This is not really 'related', move it to somewhere else
- \section{Processing implementation of simple gestures in Android}
- An implementation of a detection architecture for some simple multi-touch
- gestures (tap, double tap, rotation, pinch and drag) using
- Processing\footnote{Processing is a Java-based development environment with
- an export possibility for Android. See also \url{http://processing.org/}.}
- can be found in a forum on the Processing website \cite{processingMT}. The
- implementation is fairly simple, but it yields some very appealing results.
- The detection logic of all gestures is combined in a single class. This
- limits extensibility, because the complexity of this class
- would increase to an undesirable level (as predicted by the GART article
- \cite{GART}). However, the detection logic itself is partially re-used in
- the reference implementation of the generic gesture detection architecture.
- \section{Analysis of related work}
- The simple Processing implementation of multi-touch events provides most of
- the functionality that can be found in existing multi-touch applications.
- In fact, many applications for mobile phones and tablets only use tap and
- scroll events. For this category of applications, using machine learning
- seems excessive. Though the representation of a gesture using a feature
- vector in a machine learning algorithm is a generic and formal way to
- define a gesture, a programmer-friendly architecture should also support
- simple, ``hard-coded'' detection code. A way to separate different pieces
- of gesture detection code, thus keeping a code library manageable and
- extendable, is to use different gesture trackers.
- % FIXME: change title below
- \chapter{Design}
- \label{chapter:design}
- % Diagrams are defined in a separate file
- \input{data/diagrams}
- \section{Introduction}
- This chapter describes the realization of a design for the generic
- multi-touch gesture detection architecture. The chapter represents the
- architecture as a diagram of relations between different components.
- Sections \ref{sec:driver-support} to \ref{sec:event-analysis} define
- requirements for the architecture, and extend the diagram with components
- that meet these requirements. Section \ref{sec:example} describes an
- example usage of the architecture in an application.
- \subsection*{Position of architecture in software}
- The input of the architecture comes from a multi-touch device driver.
- The task of the architecture is to translate this input to multi-touch
- gestures that are used by an application, as illustrated in figure
- \ref{fig:basicdiagram}. In the course of this chapter, the diagram is
- extended with the different components of the architecture.
- \basicdiagram{A diagram showing the position of the architecture
- relative to the device driver and a multi-touch application. The input
- of the architecture is given by a touch device driver. This input is
- translated to complex interaction gestures and passed to the
- application that is using the architecture.}
- \section{Supporting multiple drivers}
- \label{sec:driver-support}
- The TUIO protocol \cite{TUIO} is an example of a protocol used by the drivers
- of multi-touch devices. TUIO uses ALIVE and SET messages to communicate
- low-level touch events (see appendix \ref{app:tuio} for more details).
- These messages are specific to the API of the TUIO protocol. Other touch
- drivers may use very different message types. To support more than
- one driver in the architecture, there must be some translation from
- driver-specific messages to a common format for primitive touch events.
- After all, the gesture detection logic in a ``generic'' architecture should
- not be implemented based on driver-specific messages. The event types in
- this format should be chosen so that multiple drivers can trigger the same
- events. If each supported driver adds its own set of event types to the
- common format, the purpose of being ``common'' would be defeated.
- A reasonable expectation for a touch device driver is that it detects
- simple touch points, with a ``point'' being an object at an $(x, y)$
- position on the touch surface. This yields a basic set of events:
- $\{point\_down, point\_move, point\_up\}$.
- The TUIO protocol supports fiducials\footnote{A fiducial is a pattern used
- by some touch devices to identify objects.}, which also have a rotational
- property. This results in a more extended set: $\{point\_down, point\_move,
- point\_up, object\_down, object\_move, object\_up,\\ object\_rotate\}$.
- Due to their generic nature, the use of these events is not limited to the
- TUIO protocol. Another driver that can distinguish rotatable objects from
- simple touch points could also trigger them.
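- To make the common format more concrete, the sketch below shows one possible
- representation of such an event as a small Python object. The class and
- attribute names are illustrative only; they are not prescribed by the
- architecture or taken from the reference implementation.
- \begin{verbatim}
- # Illustrative sketch of a common event format (names are assumptions).
- POINT_DOWN, POINT_MOVE, POINT_UP = 'point_down', 'point_move', 'point_up'
- OBJECT_DOWN, OBJECT_MOVE, OBJECT_UP = 'object_down', 'object_move', 'object_up'
- OBJECT_ROTATE = 'object_rotate'
- 
- class Event(object):
-     """A driver-independent primitive event."""
-     def __init__(self, event_type, x, y, object_id=None, angle=None):
-         self.type = event_type      # one of the constants above
-         self.x, self.y = x, y       # position on the touch surface
-         self.object_id = object_id  # identifies a touch point or fiducial
-         self.angle = angle          # only meaningful for object_* events
- \end{verbatim}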
- The component that translates driver-specific messages to common events
- is called the \emph{event driver}. The event driver runs in a loop,
- receiving and analyzing driver messages. When a sequence of messages is
- analyzed as an event, the event driver delegates the event to other
- components in the architecture for translation to gestures. This
- communication flow is illustrated in figure \ref{fig:driverdiagram}.
- A touch device driver can be supported by adding an event driver
- implementation for it. The event driver implementation that is used in an
- application thus depends on the touch device driver being used.
- \driverdiagram{Extension of the diagram from figure \ref{fig:basicdiagram},
- showing the position of the event driver in the architecture. The event
- driver translates driver-specific messages to a common set of events, which are
- delegated to analysis components that will interpret them as more complex
- gestures.}
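- As an illustration of this separation, an event driver can be thought of as a
- base class that driver-specific implementations extend. The following Python
- sketch assumes a simple callback-style delegation; the class and method names
- are hypothetical and do not correspond to a fixed API.
- \begin{verbatim}
- class EventDriver(object):
-     """Translates driver-specific messages to common events (sketch)."""
-     def __init__(self, delegate):
-         # 'delegate' is a callable that receives every translated event.
-         self.delegate = delegate
- 
-     def start(self):
-         # Driver-specific subclasses run their receive loop here.
-         raise NotImplementedError
- 
- class DummyEventDriver(EventDriver):
-     """Toy driver that emits a single hard-coded touch sequence."""
-     def start(self):
-         for event in [('point_down', 0.2, 0.3),
-                       ('point_move', 0.25, 0.3),
-                       ('point_up', 0.25, 0.3)]:
-             self.delegate(event)
- \end{verbatim}
- A TUIO-based event driver would, in the same fashion, run a message loop that
- receives ALIVE and SET messages and calls the delegate with the corresponding
- point events.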
- \section{Restricting gestures to a screen area}
- % TODO: in introduction: gestures are composed of multiple primitives
- Touch input devices are unaware of the graphical input widgets rendered on
- screen and therefore generate events that simply identify the screen
- location at which an event takes place. In order to be able to direct a
- gesture to a particular widget on screen, an application programmer must
- restrict the occurrence of a gesture to the area of the screen covered by
- that widget. An important question is whether the architecture should offer a
- solution to this problem, or leave it to the programmer to assign gestures
- to a widget.
- % TODO: first: leave this to the developer, referring to the previous
- % diagram. Then: consider the following example: ... two squares that both
- % listen for rotation (figure for illustration): if you rotate them
- % simultaneously, only one global event occurs. So: do not restrict
- % gestures to an area, but events. Then each square can get its own
- % detection logic, with the events at that location as input. In other
- % words: this cannot be left to the developer, because the input of the
- % detection logic has to change (which the developer has no influence on).
- % Conclusion: it must be possible to restrict events to an "area" of the
- % screen. At this point the diagram can already be extended.
- % Then: the simplest approach is a list of areas; if an event falls inside
- % one, delegate it. Problem (illustrate with an example of nested widgets
- % that both listen for a tap): if areas overlap, certain events should be
- % reserved for certain pieces of detection logic.
- % Solution: store areas in a tree structure and use event propagation
- % -> an area inside a parent area can propagate events to that parent,
- % detection logic can stop propagation. To propagate upwards in the tree,
- % the event first has to reach the leaf, so first delegate down to the
- % lowest leaf node that contains the event.
- % Special case: overlapping areas in the same layer of the tree. In that
- % case, the area that was added later (right sibling) is assumed to lie on
- % top of the sibling to its left and therefore receives the event first.
- % If propagation is stopped in the top (right) area, the (left) sibling
- % behind it does not receive the event either.
- % Additional advantage of the tree structure: easy to integrate with e.g.
- % GTK, which uses a tree structure for its widgets -> create an area for
- % every widget that receives touch events.
- Gestures are composed of primitive events using detection logic. If a
- particular gesture should only occur within some area of the screen, it
- should be composed only of events that occur within that area. Events that
- occur outside the area are not relevant to the gesture. In other words,
- the gesture detection logic is affected by the area in which the
- gestures should be detected. Since the detection logic is part of the
- architecture, the architecture must be able to restrict the set of events
- that is delegated to the particular piece of detection logic for the
- gesture being detected in that area.
- For example, a button tap\footnote{A ``tap'' gesture is triggered when a
- touch object releases a touch surface within a certain time and distance
- from the point where it initially touched the surface.} should only occur
- on the button itself, and not in any other area of the screen. A solution
- to this problem is the use of \emph{widgets}. The button from the example
- can be represented as a rectangular widget with a position and size. The
- position and size are compared with the event coordinates to determine
- whether an event occurred within the button.
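- A minimal sketch of such a rectangular widget is given below; the class name
- and attributes are illustrative, not part of a fixed API.
- \begin{verbatim}
- class RectangularWidget(object):
-     """Rectangular screen area that can receive events (sketch)."""
-     def __init__(self, x, y, width, height):
-         self.x, self.y = x, y
-         self.width, self.height = width, height
- 
-     def contains(self, event_x, event_y):
-         # An event is only relevant to this widget if its coordinates
-         # fall inside the rectangle.
-         return (self.x <= event_x <= self.x + self.width and
-                 self.y <= event_y <= self.y + self.height)
- \end{verbatim}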
- \subsection*{Callbacks}
- \label{sec:callbacks}
- When an event is propagated by a widget, it is first used for event
- analysis on that widget. The event analysis can then trigger a gesture
- in the widget, which has to be handled by the application. To handle a
- gesture, the widget should provide a callback mechanism: the
- application binds a handler for a specific type of gesture to a widget.
- When a gesture of that type is triggered after event analysis, the
- widget triggers the callback.
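- The callback mechanism itself can be as simple as a mapping from gesture
- types to handler functions, as in the following sketch (the method names are
- illustrative):
- \begin{verbatim}
- class Widget(object):
-     """Minimal gesture callback mechanism (sketch)."""
-     def __init__(self):
-         self.handlers = {}  # gesture type -> list of handler functions
- 
-     def bind(self, gesture_type, handler):
-         self.handlers.setdefault(gesture_type, []).append(handler)
- 
-     def trigger(self, gesture_type, *args):
-         # Called by the event analysis when a gesture has been detected.
-         for handler in self.handlers.get(gesture_type, []):
-             handler(*args)
- \end{verbatim}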
- \subsection*{Widget tree}
- A problem occurs when widgets overlap. If a button is placed over a
- container and an event occurs inside the button, should the
- button handle the event first? And should the container receive the
- event at all, or should it be reserved for the button?
- The solution to this problem is to save widgets in a tree structure.
- There is one root widget, whose size is limited by the size of the
- touch screen. Being the leaf widget, and thus the widget that is
- actually touched when an object touches the device, the button widget
- should receive an event before its container does. However, events
- occur on a screen-wide level and thus at the root level of the widget
- tree. Therefore, an event is delegated in the tree before any analysis
- is performed. Delegation stops at the ``lowest'' widget in the tree
- containing the event coordinates. That widget then performs some
- analysis of the event, after which the event is released back to the
- parent widget for analysis. This release of an event to a parent widget
- is called \emph{propagation}. To be able to reserve an event to some
- widget or analysis, the propagation of an event can be stopped during
- analysis.
- % TODO: inspired by JavaScript DOM
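- The sketch below illustrates this delegation and propagation scheme. It
- reuses the rectangular widgets sketched earlier; a later-added sibling is
- assumed to lie on top of the siblings to its left and therefore receives
- the event first.
- \begin{verbatim}
- class TreeWidget(object):
-     """Widget in a tree: events are delegated down to the deepest widget
-     containing them, then propagated back up unless stopped (sketch)."""
-     def __init__(self, x, y, width, height, parent=None):
-         self.x, self.y, self.width, self.height = x, y, width, height
-         self.parent = parent
-         self.children = []
-         if parent is not None:
-             parent.children.append(self)
- 
-     def contains(self, x, y):
-         return (self.x <= x <= self.x + self.width and
-                 self.y <= y <= self.y + self.height)
- 
-     def delegate(self, x, y):
-         # Later-added siblings lie "on top", so check them first.
-         for child in reversed(self.children):
-             if child.contains(x, y):
-                 return child.delegate(x, y)
-         self.propagate(x, y)
- 
-     def propagate(self, x, y):
-         # Analysis may stop propagation by returning True (assumption).
-         stopped = self.analyze(x, y)
-         if not stopped and self.parent is not None:
-             self.parent.propagate(x, y)
- 
-     def analyze(self, x, y):
-         return False  # placeholder for the gesture analysis of this widget
- \end{verbatim}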
- Many GUI frameworks, like GTK \cite{GTK}, also use a tree structure to
- manage their widgets. This makes it easy to connect the architecture to
- such a framework. For example, the programmer can define a
- \texttt{GtkTouchWidget} that synchronizes the position of a touch
- widget with that of a GTK widget, using GTK signals.
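- As a sketch of such an integration (using PyGTK; the helper function and the
- attributes of the touch widget are assumptions), the GTK ``size-allocate''
- signal can be used to keep the touch area in sync with the GTK widget:
- \begin{verbatim}
- import gtk
- 
- def synchronize_touch_area(gtk_widget, touch_widget):
-     """Keep a touch widget's rectangle in sync with a GTK widget."""
-     def on_size_allocate(widget, allocation):
-         # Note: the allocation is relative to the GTK window; a further
-         # translation to screen coordinates may still be needed.
-         touch_widget.x, touch_widget.y = allocation.x, allocation.y
-         touch_widget.width = allocation.width
-         touch_widget.height = allocation.height
-     gtk_widget.connect('size-allocate', on_size_allocate)
- \end{verbatim}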
- \subsection*{Position of widget tree in architecture}
- \widgetdiagram{Extension of the diagram from figure
- \ref{fig:driverdiagram}, showing the position of widgets in the
- architecture.}
- \section{Event analysis}
- \label{sec:event-analysis}
- % TODO: the essence should be that gesture trackers divide detection logic
- % into manageable pieces and are assigned to a single area, so that
- % multiple trackers can run simultaneously on different parts of the
- % screen. A formal definition of gestures might be better, but is not
- % given in this thesis (it is discussed in future work).
- The events that are delegated to widgets must be analyzed in some way to
- detect gestures. This analysis is specific to the type of gesture being
- detected. E.g., the detection of a ``tap'' gesture is very different from the
- detection of a ``rotate'' gesture. The implementation described in
- \cite{win7touch} separates the detection of different gestures into different
- \emph{gesture trackers}. This keeps the different pieces of detection code
- manageable and extensible. Therefore, the architecture also uses gesture
- trackers to separate the analysis of events. A single gesture tracker detects a
- specific set of gesture types, given a sequence of events. An example of a
- possible gesture tracker implementation is a ``transformation tracker''
- that detects rotation, scaling and translation gestures.
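- As an illustration, the sketch below shows what a tracker for ``tap''
- gestures might look like. The thresholds and the widget interface are
- assumptions made for this example and are not taken from the reference
- implementation.
- \begin{verbatim}
- import math, time
- 
- class TapTracker(object):
-     """Detects 'tap' gestures from point_down/point_up events (sketch)."""
-     MAX_DURATION = 0.3   # seconds (illustrative threshold)
-     MAX_DISTANCE = 20.0  # pixels (illustrative threshold)
- 
-     def __init__(self, widget):
-         self.widget = widget  # widget whose callbacks are triggered
-         self.downs = {}       # touch id -> (x, y, timestamp)
- 
-     def on_event(self, event):
-         if event.type == 'point_down':
-             self.downs[event.object_id] = (event.x, event.y, time.time())
-         elif event.type == 'point_up' and event.object_id in self.downs:
-             x, y, t = self.downs.pop(event.object_id)
-             moved = math.hypot(event.x - x, event.y - y)
-             if time.time() - t < self.MAX_DURATION \
-                     and moved < self.MAX_DISTANCE:
-                 self.widget.trigger('tap', event.x, event.y)
- \end{verbatim}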
- \subsection*{Assignment of a gesture tracker to a widget}
- As explained in section \ref{sec:callbacks}, events are delegated from
- a widget to some event analysis. The analysis component of a widget
- consists of a list of gesture trackers, each tracking a specific set of
- gestures. No two trackers in the list should be tracking the same
- gesture type.
- When a handler for a gesture is ``bound'' to a widget, the widget
- asserts that it has a tracker that is tracking this gesture. Thus, the
- programmer does not create gesture trackers manually. Figure
- \ref{fig:trackerdiagram} shows the position of gesture trackers in the
- architecture.
- \trackerdiagram{Extension of the diagram from figure
- \ref{fig:widgetdiagram}, showing the position of gesture trackers in
- the architecture.}
- \section{Serving multiple applications}
- % TODO
- \section{Example usage}
- \label{sec:example}
- This section describes an example that illustrates the API of the
- architecture. The example application listens to tap events on a button.
- The button is located inside an application window, which can be resized
- using pinch gestures.
- % TODO: remove the comments, write this down in pseudocode
- \begin{verbatim}
- initialize GUI, creating a window
- # Add widgets representing the application window and button
- rootwidget = new rectangular Widget object
- set rootwidget position and size to that of the application window
- buttonwidget = new rectangular Widget object
- set buttonwidget position and size to that of the GUI button
- # Create an event server that will be started later
- server = new EventServer object
- set rootwidget as root widget for server
- # Define handlers and bind them to corresponding widgets
- begin function resize_handler(gesture)
- resize GUI window
- update position and size of root widget
- end function
- begin function tap_handler(gesture)
- # Perform some action that the button is meant to do
- end function
- bind ('pinch', resize_handler) to rootwidget
- bind ('tap', tap_handler) to buttonwidget
- # Start event server (which in turn starts a driver-specific event server)
- start server
- \end{verbatim}
- \examplediagram{Diagram representation of the example above. Dotted arrows
- represent gestures, normal arrows represent events (unless labeled
- otherwise).}
- \chapter{Test applications}
- To test multi-touch interaction properly, a multi-touch device is required. The
- University of Amsterdam (UvA) has provided access to a multi-touch table from
- PQlabs. The table uses the TUIO protocol \cite{TUIO} to communicate touch
- events. See appendix \ref{app:tuio} for details regarding the TUIO protocol.
- The reference implementation is a Proof of Concept that translates TUIO
- messages to some simple touch gestures (see appendix \ref{app:implementation}
- for details).
- % TODO
- % test programs with PyGame/Cairo
- \chapter{Suggestions for future work}
- % TODO
- % - network protocol (ZeroMQ) for multiple languages and simultaneous processes
- % - use a more formal definition of gestures instead of explicit detection
- %   logic, e.g. a state machine
- % - next step: build a library that contains multiple drivers and complex
- %   gestures
- \bibliographystyle{plain}
- \bibliography{report}{}
- \appendix
- \chapter{The TUIO protocol}
- \label{app:tuio}
- The TUIO protocol \cite{TUIO} defines a way to geometrically describe tangible
- objects, such as fingers or objects on a multi-touch table. Object information
- is sent to the TUIO UDP port (3333 by default).
- For efficiency reasons, the TUIO protocol is encoded using the Open Sound
- Control \cite[OSC]{OSC} format. An OSC server/client implementation is
- available for Python: pyOSC \cite{pyOSC}.
- A Python implementation of the TUIO protocol also exists: pyTUIO \cite{pyTUIO}.
- However, the execution of an example script yields an error regarding Python's
- built-in \texttt{socket} library. Therefore, the reference implementation uses
- the pyOSC package to receive TUIO messages.
- The two most important message types of the protocol are ALIVE and SET
- messages. An ALIVE message contains the list of session ids that are currently
- ``active'', which in the case of a multi-touch table means that they are
- touching the screen. A SET message provides geometric information about a session
- id, such as position, velocity and acceleration.
- Each session id represents an object. The only type of object on the
- multi-touch table is what the TUIO protocol calls ``2DCur'', which is an
- $(x, y)$ position on the screen.
- ALIVE messages can be used to determine when an object touches and releases the
- screen. For example, if a session id was in the previous message but not in the
- current one, the object it represents has been lifted from the screen.
- SET messages provide information about movement. In the case of simple
- $(x, y)$ positions, only the movement vector of the position itself can be
- calculated. For more
- complex objects such as fiducials, arguments like rotational position and
- acceleration are also included.
- ALIVE and SET messages can be combined to create ``point down'', ``point move''
- and ``point up'' events (as used by the Windows 7 implementation
- \cite{win7touch}).
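- For example, ``down'' and ``up'' events can be derived by comparing the sets
- of session ids in two consecutive ALIVE messages, as in the following sketch
- (the function name is illustrative):
- \begin{verbatim}
- def alive_to_events(previous_alive, current_alive):
-     """Derive point_down/point_up events from two consecutive ALIVE
-     messages, given as sets of session ids (sketch)."""
-     down_ids = current_alive - previous_alive  # new ids: object touched
-     up_ids = previous_alive - current_alive    # missing ids: object lifted
-     return ([('point_down', sid) for sid in down_ids] +
-             [('point_up', sid) for sid in up_ids])
- 
- # Example: alive_to_events({1, 2}, {2, 3}) yields a point_down for
- # session id 3 and a point_up for session id 1.
- \end{verbatim}
- SET messages for the remaining ids can then be translated to ``point move''
- events whenever the reported position changes.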
- TUIO coordinates range from $0.0$ to $1.0$, with $(0.0, 0.0)$ being the left
- top corner of the screen and $(1.0, 1.0)$ the right bottom corner. To focus
- events within a window, a translation to window coordinates is required in the
- client application, as stated by the online specification
- \cite{TUIO_specification}:
- \begin{quote}
- In order to compute the X and Y coordinates for the 2D profiles a TUIO
- tracker implementation needs to divide these values by the actual sensor
- dimension, while a TUIO client implementation consequently can scale these
- values back to the actual screen dimension.
- \end{quote}
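- A sketch of this translation is given below; the parameter names are
- illustrative, and the window geometry is assumed to be known to the client.
- \begin{verbatim}
- def tuio_to_window(tx, ty, screen_width, screen_height, win_x, win_y):
-     """Scale normalized TUIO coordinates (0.0-1.0) to screen pixels and
-     translate them to window coordinates (sketch)."""
-     screen_x = tx * screen_width
-     screen_y = ty * screen_height
-     return screen_x - win_x, screen_y - win_y
- \end{verbatim}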
- \chapter{Experimental program}
- \label{app:experiment}
- % TODO: rewrite intro
- When designing a software library, its API should be understandable and easy to
- use for programmers. To find out the basic requirements for a usable API,
- an experimental program has been written based on the Processing code
- from \cite{processingMT}. The program receives TUIO events and translates them
- to point \emph{down}, \emph{move} and \emph{up} events. These events are then
- interpreted to be (double or single) \emph{tap}, \emph{rotation} or
- \emph{pinch} gestures. A simple drawing program then draws the current state to
- the screen using the PyGame library. The output of the program can be seen in
- figure \ref{fig:draw}.
- \begin{figure}[H]
- \centering
- \includegraphics[scale=0.4]{data/experimental_draw.png}
- \caption{Output of the experimental drawing program. It draws the touch
- points and their centroid on the screen (the centroid is used as center
- point for rotation and pinch detection). It also draws a green
- rectangle which responds to rotation and pinch events.}
- \label{fig:draw}
- \end{figure}
- One of the first observations is the fact that TUIO's \texttt{SET} messages use
- the TUIO coordinate system, as described in appendix \ref{app:tuio}. The test
- program multiplies these by its own dimensions, thus showing the entire
- screen in its window. Also, the implementation only works using the TUIO
- protocol. Other drivers are not supported.
- Though it uses relatively simple math, the rotation and pinch detection works
- surprisingly well. Both rotation and pinch use the centroid of all touch
- points. A \emph{rotation} gesture uses the difference in angle relative to the
- centroid of all touch points, and \emph{pinch} uses the difference in distance.
- Both values are normalized using division by the number of touch points. A
- pinch event contains a scale factor, and therefore uses a division of the
- current by the previous average distance to the centroid.
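- The following sketch reconstructs how such a computation might look; it is an
- approximation of the experimental code, not a verbatim copy, and it assumes
- that the two point lists contain the same touch points in the same order.
- \begin{verbatim}
- import math
- 
- def centroid(points):
-     xs, ys = zip(*points)
-     return sum(xs) / float(len(points)), sum(ys) / float(len(points))
- 
- def pinch_scale(previous, current):
-     """Scale factor: ratio of average distances to the centroid."""
-     def avg_distance(points):
-         cx, cy = centroid(points)
-         return sum(math.hypot(x - cx, y - cy)
-                    for x, y in points) / len(points)
-     return avg_distance(current) / avg_distance(previous)
- 
- def rotation_angle(previous, current):
-     """Average change in angle of each point relative to the centroid
-     (ignoring angle wrap-around for simplicity)."""
-     pcx, pcy = centroid(previous)
-     ccx, ccy = centroid(current)
-     diff = sum(math.atan2(cy - ccy, cx - ccx) - math.atan2(py - pcy, px - pcx)
-                for (px, py), (cx, cy) in zip(previous, current))
-     return diff / len(current)
- \end{verbatim}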
- There is a flaw in this implementation. Since the centroid is calculated using
- all current touch points, there cannot be two or more rotation or pinch
- gestures simultaneously. On a large multi-touch table, it is desirable to
- support interaction with multiple hands, or multiple persons, at the same time.
- Such application-specific requirements should be defined in the
- application itself, whereas the experimental implementation defines detection
- algorithms based on its test program.
- Also, the different detection algorithms are all implemented in the same file,
- making it complex to read or debug, and difficult to extend.
- \chapter{Reference implementation in Python}
- \label{app:implementation}
- % TODO
- % only window.contains on point down, not on move/up
- % a few simple windows and trackers
- \end{document}