Browse source code

Removed old experiments/design chapters.

Taddeus Kroes 13 years ago
parent
commit
04e6dbc897
1 changed file with 27 additions and 242 deletions

+ 27 - 242
docs/report.tex

@@ -160,7 +160,7 @@ Python.
     extendable, is to use different gesture trackers.
 
 
 % FIXME: change title below
-\chapter{Design - new}
+\chapter{Design}
 
 
     % Diagrams are defined in a separate file
     \input{data/diagrams}
@@ -187,11 +187,11 @@ Python.
         driver.  For example, the table used in the experiments uses the TUIO
         protocol. The task of the architecture is to translate this input to
         multi-touch gestures that are used by an application, as illustrated in
-        figure \ref{fig:basicdiagram}. At the end of this chapter, the diagram
-        is extended with the different components of the architecture.
+        figure \ref{fig:basicdiagram}. In the course of this chapter, the
+        diagram is extended with the different components of the architecture.
 
 
         \basicdiagram{A diagram showing the position of the architecture
-        relative to a multi-touch application.}
+        relative to the device driver and a multi-touch application.}
 
 
     \section{Supporting multiple drivers}
 
 
@@ -201,12 +201,13 @@ Python.
     driver-specific messages to a common format in the architecture. Messages in
     this common format will be called \emph{events}. Events can be translated
     to multi-touch \emph{gestures}. The most basic set of events is
-    ${point\_down, point\_move, point\_up}$.
+    $\{point\_down, point\_move, point\_up\}$. Here, a ``point'' is a touch
+    object with only an (x, y) position on the screen.
 
 
-    A more extended set could also contain more complex events. However, a
-    object can also have a rotational property, like the ``fiducials'' type in
-    the TUIO protocol. This results in $\{point\_down, point\_move, point\_up,
-    object\_down, object\_move, object\_up,\\object\_rotate\}$.
+    A more extended set could also contain more complex events. An object can
+    also have a rotational property, like the ``fiducials'' type in the TUIO
+    protocol. This results in $\{point\_down, point\_move,\\point\_up,
+    object\_down, object\_move, object\_up, object\_rotate\}$.
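As a rough sketch, such an event set could be modelled in Python as follows; the EventType and Event names are assumptions for illustration, not the classes of the reference implementation.

from dataclasses import dataclass
from enum import Enum, auto


class EventType(Enum):
    POINT_DOWN = auto()
    POINT_MOVE = auto()
    POINT_UP = auto()
    OBJECT_DOWN = auto()
    OBJECT_MOVE = auto()
    OBJECT_UP = auto()
    OBJECT_ROTATE = auto()


@dataclass
class Event:
    type: EventType
    x: float                 # screen position of the touch object
    y: float
    angle: float = 0.0       # rotation, only meaningful for object_* events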
 
 
     The component that translates driver-specific messages to events is called
     the \emph{event driver}. The event driver runs in a loop, receiving and
@@ -312,253 +313,37 @@ Python.
         \ref{fig:widgetdiagram}, showing the position of gesture trackers in
         the architecture.}
 
 
-    \section{Example usage}
-
-
-
-
-
-
-
-
-
-
-
+    \section{Something more here with example diagrams...}
 
 
-
-
-
-
-
-
-
-% FIXME: Delete the 2 following chapters
-
-\chapter{Experiments}
-\label{chapter:requirements}
-
-% test implementation with taps, rotation and pinch. This showed:
-% - that there are several ways to detect e.g. "rotation" (and that it
-%   must be possible to distinguish between them)
-% - that detection of different kinds of gestures must be separable,
-%   otherwise it becomes chaotic.
-% - A number of choices were made when designing the gestures, e.g. that
-%   rotation uses ALL fingers for the centroid. Another program might need
-%   to use only one hand, and thus to pick points close to each other
-%   (solution: windows).
-
-    \section{Introduction}
-
-    To test multi-touch interaction properly, a multi-touch device is required.
-    The University of Amsterdam (UvA) has provided access to a multi-touch
-    table from PQlabs. The table uses the TUIO protocol \cite{TUIO} to
-    communicate touch events. See appendix \ref{app:tuio} for details regarding
-    the TUIO protocol.
-
-    \section{Summary of observations}
-    \label{sec:observations}
-
-    \begin{itemize}
-        \item The TUIO protocol uses a distinctive coordinate system and set of
-            messages.
-        \item Touch events occur outside of the application window.
-        \item Gestures that use multiple touch points are using all touch
-            points (not a subset of them).
-        \item Code complexity increases when detection algorithms are added.
-        \item A multi-touch application can have very specific requirements for
-            gestures.
-    \end{itemize}
-
-    \section{Requirements}
-
-    From the observations in section \ref{sec:observations}, a number of
-    requirements can be specified for the design of the event mechanism:
-
-    \begin{itemize}
-        % translating driver-specific events to a common format
-        \item To be able to support multiple input drivers, there must be a
-            translation from driver-specific messages to some common format
-            that can be used in gesture detection algorithms.
-        % assigning events to a GUI window (windows)
-        \item An application GUI window should be able to receive only events
-            occurring within that window, and not outside of it.
-        % separating groups of touch points for different gestures (windows)
-        \item To support multiple objects that are performing different
-            gestures at the same time, the architecture must be able to perform
-            gesture detection on a subset of the active touch points.
-        % separating detection code for different gesture types
-        \item To avoid an increase in code complexity when adding new detection
-            algorithms, detection code of different gesture types must be
-            separated.
-        % extendability
-        \item The architecture should allow for extension with new detection
-            algorithms to be added to an implementation. This enables a
-            programmer to define custom gestures for an application.
-    \end{itemize}
-
-\chapter{Design}
-
-    \section{Components}
-
-        Based on the requirements from chapter \ref{chapter:requirements}, a design
-        for the architecture has been created. The design consists of a number
-        of components, each having a specific set of tasks.
-
-        % TODO: Rewrite components, use more diagrams
-
-        \subsection{Event server}
-
-        % translation from driver messages to point down, move, up
-        % translation to screen pixel coordinates
-        % TUIO in reference implementation
-
-        The \emph{event server} is an abstraction for driver-specific server
-        implementations, such as a TUIO server. It receives driver-specific
-        messages and translates these to a common set of events and a common
-        coordinate system.
-
-        A minimal example of a common set of events is $\{point\_down,
-        point\_move, point\_up\}$. This is the set used by the reference
-        implementation. Respectively, these events represent an object being
-        placed on the screen, moving along the surface of the screen, and being
-        released from the screen.
-
-        A more extended set could also contain the same three events for an
-        object touching the screen. However, an object can also have a
-        rotational property, like the ``fiducials'' type in the TUIO protocol.
-        This results in $\{point\_down, point\_move, point\_up, object\_down,
-        object\_move, object\_up,\\object\_rotate\}$.
-        % TODO: is this useful? Merge point_down/object_down in some way?
-
-        An important note here is that similar events triggered by different
-        event servers must have the same event type and parameters. In other
-        words, the output of the event servers should be determined by the
-        gesture servers (not the contrary).
-
-        The output of an event server implementation should also use a common
-        coordinate system, that is the coordinate system used by the gesture
-        server. For example, the reference implementation uses screen
-        coordinates in pixels, where (0, 0) is the upper left corner and
-        (\emph{screen width}, \emph{screen height}) the lower right corner of
-        the screen.
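For instance, a driver-specific event server for TUIO could translate the protocol's normalised 0.0-1.0 positions to pixel coordinates roughly as sketched below; the class and method names are assumptions, not the reference implementation's API.

class TuioEventServer:
    """Sketch: translate TUIO's normalised coordinates to screen pixels."""

    def __init__(self, screen_width, screen_height):
        self.screen_width = screen_width
        self.screen_height = screen_height

    def to_pixels(self, nx, ny):
        # TUIO positions range from 0.0 to 1.0 over the whole screen; the
        # common coordinate system uses pixels with (0, 0) at the upper left.
        return (nx * self.screen_width, ny * self.screen_height)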
-
-        The abstract class definition of the event server should provide some
-        functionality to detect which driver-specific event server
-        implementation should be used.
-
-        \subsection{Gesture trackers}
-
-        Like the .NET implementation \cite{win7touch}, the architecture uses a
-        \emph{gesture tracker} to detect if a sequence of events forms a
-        particular gesture. A gesture tracker detects and triggers events for a
-        limited set of gesture types, given a set of touch points. If one group
-        of touch points is assigned to one tracker and another group to another
-        tracker, multiple gestures can be detected at the same time. For the
-        assignment of different groups of touch points to different gesture
-        trackers, the architecture uses so-called \emph{windows}. These are
-        described in the next section.
-
-        % event binding/triggering
-        A gesture tracker triggers a gesture event by executing a callback.
-        Callbacks are ``bound'' to a tracker by the application. Because
-        multiple gesture types can have very similar detection algorithms, a
-        tracker can detect multiple different types of gestures. For instance,
-        the rotation and pinch gestures from the experimental program in
-        section \ref{sec:experimental-draw} both use the centroid of all touch
-        points.
-
-        If no callback is bound for a particular gesture type, no detection of
-        that type is needed. A tracker implementation can use this knowledge
-        for code optimization.
-
-        % separation of detection algorithms
-        A tracker implementation defines the gesture types it can trigger, and
-        the detection algorithms to trigger them. Consequently, detection
-        algorithms can be separated in different trackers. Different
-        trackers can be saved in different files, reducing the complexity of
-        the code in a single file. \\
-        % extendability
-        Because a tracker defines its own set of gesture types, the application
-        developer can define application-specific trackers (by extending a base
-        \texttt{GestureTracker} class, for example). In fact, any built-in
-        gesture trackers of an implementation are also created this way. This
-        allows for a plugin-like way of programming, which is very desirable if
-        someone would want to build a library of gesture trackers. Such a
-        library can easily be extended by others.
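A gesture tracker with per-gesture-type callbacks could be sketched in Python as follows; the class and method names are illustrative assumptions, not the actual implementation.

class GestureTracker:
    """Sketch of a tracker base class: callbacks are bound per gesture type."""

    def __init__(self):
        self.handlers = {}  # gesture type -> list of bound callbacks

    def bind(self, gesture_type, handler):
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger(self, gesture_type, gesture):
        for handler in self.handlers.get(gesture_type, []):
            handler(gesture)


class TapTracker(GestureTracker):
    def handle_event(self, event, touch_points):
        # Skip the detection work entirely when no 'tap' callback is bound.
        if 'tap' not in self.handlers:
            return
        # ... tap detection logic would go here ...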
-
-        \subsection{Windows}
-
-        A \emph{window} represents a subset of the entire screen surface. The
-        goal of a window is to restrict the detection of certain gestures to
-        certain areas. A window contains a list of touch points, and a list of
-        trackers. A gesture server (defined in the next section) assigns touch
-        points to a window, but the window itself defines functionality to
-        check whether a touch point is inside the window. This way, new windows
-        can be defined to fit over any 2D object used by the application.
-
-        The first and most obvious use of a window is to restrict touch events
-        to a single application window. However, windows can be used in far
-        more powerful ways.
-
-        For example, suppose an application contains an image with a transparent
-        background that can be dragged around. The user can only drag the image
-        by touching its foreground. To accomplish this, the application
-        programmer can define a window type that uses a bitmap to determine
-        whether a touch point is on the visible image surface. The tracker
-        which detects drag gestures is then bound to this window, limiting the
-        occurrence of drag events to the image surface.
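Such a bitmap-based window could be sketched as follows, assuming the base window class only has to provide a containment test; all names are illustrative.

class Window:
    """Sketch: a window only needs to decide whether it contains a point."""

    def contains(self, x, y):
        raise NotImplementedError


class BitmapWindow(Window):
    def __init__(self, x, y, mask):
        self.x, self.y = x, y  # top-left position of the image on the screen
        self.mask = mask       # 2D list of booleans, True = visible pixel

    def contains(self, x, y):
        row, col = int(y - self.y), int(x - self.x)
        if row < 0 or col < 0 or row >= len(self.mask) or col >= len(self.mask[0]):
            return False
        return self.mask[row][col]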
-
-        % assigning events to a part of the screen:
-        % TUIO coordinates span the whole screen and range from 0.0 to 1.0,
-        % so they must be translated to pixel coordinates within a ``window''
-        % TODO
-
-        \subsection{Gesture server}
-
-        % listens for point down, move, up
-        The \emph{gesture server} delegates events from the event server to the
-        set of windows that contain the touch points related to the events.
-
-        % assignment of point (down) to window(s)
-        The gesture server contains a list of windows. When the event server
-        triggers an event, the gesture server ``asks'' each window whether it
-        contains the related touch point. If so, the window updates its gesture
-        trackers, which can then trigger gestures.
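The delegation loop could be sketched as follows; handle_event is an assumed name for however a window passes an event on to its gesture trackers.

class GestureServer:
    """Sketch: delegate events to the windows that contain the touch point."""

    def __init__(self):
        self.windows = []

    def add_window(self, window):
        self.windows.append(window)

    def on_event(self, event):
        for window in self.windows:
            if window.contains(event.x, event.y):
                # The window updates its gesture trackers, which may in turn
                # trigger gesture callbacks.
                window.handle_event(event)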
-
-    \section{Diagrams}
+    % TODO
 
 
     \section{Example usage}
 
 
-    This section describes an example that illustrates the communication
-    between different components. The example application listens to tap events
-    in a GUI window.
+    This section describes an example that illustrates the API of the
+    architecture. The example application listens to tap events in a GUI
+    window.
 
 
     \begin{verbatim}
-    # Create a gesture server that will be started later
-    server = new GestureServer object
-
     # Add a new window to the server, representing the GUI
-    window = new Window object
-    set window position and size to that of GUIO window
-    add window to server
+    widget = new rectangular Widget object
+    set widget position and size to that of the GUI window
+
+    # If the GUI toolkit allows it, bind window movement and resize handlers
+    # that alter the position and size of the widget object
+
+    # Create an event server that will be started later
+    server = new EventServer object
+    set widget as root widget for server
 
 
     # Define a handler that must be triggered when a tap gesture is detected
     begin function handler(gesture)
         # Do something
     end function
 
 
-    # Create a tracker that detects tap gestures
-    tracker = new TapTracker object  # Where TapTracker is an implementation of
-                                     # abstract Tracker
-    add tracker tot window
-    bind handler to tracker.tap
-
-    # If the GUI toolkit allows it, bind window movement and resize handlers
-    # that alter the position size and sieze of the window object
+    # Bind the handler to the 'tap' event (the widget creates a tap tracker)
+    bind ('tap', handler) to widget
 
 
-    # Start the gesture server (which in turn starts a driver-specific event
-    # server)
+    # Start event server (which in turn starts a driver-specific event server)
     start server
     \end{verbatim}
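Rendered as Python instead of pseudocode, the example could look roughly as follows; Widget, EventServer and their methods are stand-in names, not the actual API.

class Widget:
    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height
        self.handlers = {}

    def bind(self, gesture_type, handler):
        # A real implementation would create e.g. a tap tracker here.
        self.handlers.setdefault(gesture_type, []).append(handler)


class EventServer:
    def __init__(self):
        self.root_widget = None

    def set_root_widget(self, widget):
        self.root_widget = widget

    def start(self):
        # A real implementation would start a driver-specific event server
        # (such as a TUIO server) and enter its receive loop here.
        pass


def handler(gesture):
    pass  # do something with the tap gesture


widget = Widget(0, 0, 640, 480)  # position and size of the GUI window
widget.bind('tap', handler)      # the widget creates a tap tracker internally
server = EventServer()
server.set_root_widget(widget)
server.start()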