
Moved tracker implementation appendix to implementation chapter.

Taddeus Kroes 13 years ago
parent commit 8709d18fc7
1 file changed, 138 additions and 139 deletions

+ 138 - 139
docs/report.tex

@@ -585,9 +585,88 @@ Python programs. The two test programs are also written in Python.
 
 The event area implementations contain some geometric functions to determine
 whether an event should be delegated to an event area. All gesture trackers
-have been implemented using an imperative programming style. Technical details
-about the implementation of gesture detection are described in appendix
-\ref{app:implementation-details}.
+have been implemented using an imperative programming style. Sections
+\ref{sec:basictracker} to \ref{sec:transformationtracker} describe the gesture
+tracker implementations in detail.
+
+\subsection{Basic tracker}
+\label{sec:basictracker}
+
+The ``basic tracker'' implementation exists only to provide access to low-level
+events in an application. Low-level events are only handled by gesture
+trackers, not by the application itself. Therefore, the basic tracker maps
+\emph{point\_\{down,move,up\}} events to equally named gestures that can be
+handled by the application.
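As a minimal sketch of this mapping (the class name and callback-based interface are illustrative assumptions, not the reference implementation's actual API):

```python
# Sketch of the basic tracker described above; names are illustrative
# assumptions, not the reference implementation's interface.
class BasicTracker:
    """Maps point_{down,move,up} events to equally named gestures."""

    def __init__(self, trigger):
        # 'trigger' is a callback that delivers gestures to the application.
        self.trigger = trigger

    def on_event(self, event_type, position):
        # Low-level events are normally consumed by gesture trackers only;
        # this tracker forwards each event as a gesture of the same name so
        # that the application can handle it directly.
        self.trigger(event_type, position)
```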
+
+\subsection{Tap tracker}
+\label{sec:taptracker}
+
+The ``tap tracker'' detects three types of tap gestures:
+
+\begin{enumerate}
+    \item The basic \emph{tap} gesture is triggered when a touch point releases
+        the touch surface within a certain time and distance of its initial
+        position. When a \emph{point\_down} event is received, its location is
+        saved along with the current timestamp. On the next \emph{point\_up}
+        event of the touch point, the differences in time and position from the
+        saved values are compared with predefined thresholds to determine
+        whether a \emph{tap} gesture should be triggered.
+    \item A \emph{double tap} gesture consists of two sequential \emph{tap}
+        gestures that are located within a certain distance of each other, and
+        occur within a certain time window. When a \emph{tap} gesture is
+        triggered, the tracker saves it as the ``last tap'' along with the
+        current timestamp. When another \emph{tap} gesture is triggered, its
+        location and the current timestamp are compared with those of the
+        ``last tap'' gesture to determine whether a \emph{double tap} gesture
+        should be triggered. If so, the gesture is triggered at the location of
+        the ``last tap'', because the second tap may be less accurate.
+    \item A separate thread handles detection of \emph{single tap} gestures at
+        a rate of thirty times per second. When the time since the ``last tap''
+        exceeds the maximum time between two taps of a \emph{double tap}
+        gesture, a \emph{single tap} gesture is triggered.
+\end{enumerate}
+
+The \emph{single tap} gesture exists to make it possible to distinguish
+between single and double tap gestures. This distinction is not possible with the
+regular \emph{tap} gesture, since the first \emph{tap} gesture has already been
+handled by the application when the second \emph{tap} of a \emph{double tap}
+gesture is triggered.
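The tap and double tap detection described above can be sketched as follows. The class name, threshold values and string return values are illustrative assumptions, and the separate thread that emits \emph{single tap} gestures is omitted for brevity:

```python
import time

# Sketch of the tap/double tap logic; names and thresholds are assumptions.
class TapTracker:
    def __init__(self, max_tap_time=0.3, max_tap_dist=10.0,
                 max_double_tap_time=0.4, max_double_tap_dist=20.0):
        self.max_tap_time = max_tap_time
        self.max_tap_dist = max_tap_dist
        self.max_double_tap_time = max_double_tap_time
        self.max_double_tap_dist = max_double_tap_dist
        self.downs = {}       # touch point id -> (position, timestamp)
        self.last_tap = None  # (position, timestamp) of the last tap

    def point_down(self, point_id, position, now=None):
        # Save the initial location along with the current timestamp.
        self.downs[point_id] = (position, now if now is not None else time.time())

    def point_up(self, point_id, position, now=None):
        """Return 'tap', 'double_tap' or None for this release."""
        now = now if now is not None else time.time()
        start_pos, start_time = self.downs.pop(point_id)
        if (now - start_time > self.max_tap_time
                or _distance(position, start_pos) > self.max_tap_dist):
            return None  # held too long or moved too far: no tap
        if (self.last_tap is not None
                and now - self.last_tap[1] <= self.max_double_tap_time
                and _distance(position, self.last_tap[0]) <= self.max_double_tap_dist):
            # Two sequential taps close together in time and space; a real
            # tracker would trigger this at the location of the first tap.
            self.last_tap = None
            return 'double_tap'
        self.last_tap = (position, now)
        return 'tap'

def _distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```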
+
+\subsection{Transformation tracker}
+\label{sec:transformationtracker}
+
+The transformation tracker triggers \emph{rotate}, \emph{pinch}, \emph{drag}
+and \emph{flick} gestures. These gestures use the centroid of all touch points.
+A \emph{rotate} gesture uses the difference in angle relative to the centroid
+of all touch points, and \emph{pinch} uses the difference in distance. Both
+values are normalized by dividing them by the number of touch points $N$. A
+\emph{pinch} gesture contains a scale factor, and therefore divides the
+current average distance to the centroid by the previous one. Any movement of
+the centroid is used for \emph{drag} gestures. When a dragged touch point is
+released, a \emph{flick} gesture is triggered in the direction of the
+\emph{drag} gesture.
+
+Figure \ref{fig:transformationtracker} shows an example situation in which a
+touch point is moved, triggering a \emph{pinch} gesture, a \emph{rotate}
+gesture and a \emph{drag} gesture.
+
+\transformationtracker
+
+The \emph{pinch} gesture in figure \ref{fig:pinchrotate} uses the ratio
+$d_2:d_1$ to calculate its $scale$ parameter. Note that the difference in
+distance $d_2 - d_1$ and the difference in angle $\alpha$ both relate to a
+single touch point. The \emph{pinch} and \emph{rotate} gestures that are
+triggered relate to all touch points, using the average of distances and
+angles. Since only one of the touch points has moved, the differences in
+distance and angle of the other touch points are zero. Thus, the averages can be
+calculated by dividing the differences in distance and angle of the moved touch
+point by the number of touch points $N$. The $scale$ parameter represents the
+scale relative to the previous situation, which results in the following
+formula:
+$$pinch.scale = \frac{d_1 + \frac{d_2 - d_1}{N}}{d_1}$$
+The angle used for the \emph{rotate} gesture is only divided by the number of
+touch points to obtain an average rotation of all touch points:
+$$rotate.angle = \frac{\alpha}{N}$$
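A numeric sketch of this averaging (the function name is an assumption; the centroid of the previous situation is used for both measurements, matching the single-moved-point example in the figure):

```python
import math

# Sketch of the pinch/rotate averaging; the function name is an assumption.
def pinch_scale_and_rotation(points_before, points_after):
    """Return the pinch scale factor and average rotation angle (radians)."""
    n = len(points_before)
    # Centroid of the previous situation.
    cx = sum(x for x, _ in points_before) / n
    cy = sum(y for _, y in points_before) / n
    dist_before = dist_after = 0.0
    angle = 0.0
    for (x1, y1), (x2, y2) in zip(points_before, points_after):
        dist_before += math.hypot(x1 - cx, y1 - cy)
        dist_after += math.hypot(x2 - cx, y2 - cy)
        angle += math.atan2(y2 - cy, x2 - cx) - math.atan2(y1 - cy, x1 - cx)
    # Scale is the current average distance to the centroid divided by the
    # previous one; the rotation is the average angle difference.
    return dist_after / dist_before, angle / n
```

For the example above, where one of $N = 2$ touch points moves from distance $d_1 = 1$ to $d_2 = 2$ while the other stays at distance $1$, this yields a scale of $1.5$, consistent with the formula $\frac{d_1 + \frac{d_2 - d_1}{N}}{d_1}$.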
 
 \section{Full screen Pygame application}
 
@@ -705,27 +784,12 @@ tapping on a polygon changes its color.
 An ``overlay'' event area is used to detect all fingers currently touching the
 screen. The application defines a custom gesture tracker, called the ``hand
 tracker'', which is used by the overlay. The hand tracker uses distances
-between detected fingers to detect which fingers belong to the same hand. The
-application draws a line from each finger to the hand it belongs to, as visible
-in figure \ref{fig:testapp}.
-
-Note that the overlay event area, though covering the entire screen surface, is
-not used as the root of the event area tree. Instead, the overlay is placed on
-top of the application window (being a rightmost sibling of the application
-window event area in the tree). This is necessary because the transformation
-trackers in the application window stop the propagation of events. The hand
-tracker needs to capture all events to be able to give an accurate
-representation of all fingers touching the screen. Therefore, the overlay
-should delegate events to the hand tracker before they are stopped by a
-transformation tracker. Placing the overlay over the application window forces
-the screen event area to delegate events to the overlay event area first. The
-event area implementation delegates events to its children in right-to-left
-order, because areas that are added to the tree later are assumed to be
-positioned over their previously added siblings.
-
+between detected fingers to detect which fingers belong to the same hand (see
+section \ref{sec:handtracker} for details). The application draws a line from
+each finger to the hand it belongs to, as visible in figure \ref{fig:testapp}.
 \begin{figure}[h!]
     \center
-    \includegraphics[scale=0.35]{data/testapp.png}
+    \includegraphics[scale=0.32]{data/testapp.png}
     \caption{
         Screenshot of the second test application. Two polygons can be dragged,
         rotated and scaled. Separate groups of fingers are recognized as hands,
@@ -744,7 +808,56 @@ shows the tree structure used by the application.
 
 \testappdiagram
 
-\section{Conclusions}
+Note that the overlay event area, though covering the entire screen surface, is
+not used as the root of the event area tree. Instead, the overlay is placed on
+top of the application window (being a rightmost sibling of the application
+window event area in the tree). This is necessary because the transformation
+trackers in the application window stop the propagation of events. The hand
+tracker needs to capture all events to be able to give an accurate
+representation of all fingers touching the screen. Therefore, the overlay
+should delegate events to the hand tracker before they are stopped by a
+transformation tracker. Placing the overlay over the application window forces
+the screen event area to delegate events to the overlay event area first. The
+event area implementation delegates events to its children in right-to-left
+order, because areas that are added to the tree later are assumed to be
+positioned over their previously added siblings.
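The right-to-left delegation order can be illustrated with a small sketch; the \texttt{EventArea} interface shown here is a simplified assumption, not the reference implementation's API:

```python
# Sketch of right-to-left event delegation; the API is an assumption.
class EventArea:
    def __init__(self, name, contains=lambda position: True):
        self.name = name
        self.contains = contains
        self.children = []

    def add(self, child):
        # Areas added later are assumed to be positioned on top of their
        # previously added siblings.
        self.children.append(child)
        return child

    def delegate(self, position, log):
        log.append(self.name)  # this area receives the event
        # Delegate to children in right-to-left (reverse insertion) order,
        # so the topmost (most recently added) sibling sees the event first.
        for child in reversed(self.children):
            if child.contains(position):
                child.delegate(position, log)
```

With a screen whose children are an application window and then an overlay, the overlay receives events before the window, as the paragraph above requires.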
+
+\subsection{Hand tracker}
+\label{sec:handtracker}
+
+The hand tracker sees each touch point as a finger. Based on a predefined
+distance threshold, each finger is assigned to a hand. Each hand consists of a
+list of finger locations, and the centroid of those locations.
+
+When a new finger is detected on the touch surface (a \emph{point\_down} event),
+the distance from that finger to all hand centroids is calculated. The hand
+with the closest centroid is the candidate hand for the new finger. If the
+distance to that centroid is larger than the predefined distance threshold, the
+finger is assumed to belong to a new hand and a \emph{hand\_down} gesture is
+triggered. Otherwise, the finger is assigned to the closest hand. In both cases, a
+\emph{finger\_down} gesture is triggered.
+
+Each touch point is assigned an ID by the reference implementation. When the
+hand tracker assigns a finger to a hand after a \emph{point\_down} event, its
+touch point ID is saved in a hash map\footnote{In computer science, a hash
+table or hash map is a data structure that uses a hash function to map
+identifying values, known as keys (e.g., a person's name), to their associated
+values (e.g., their telephone number). Source:
+\url{http://en.wikipedia.org/wiki/Hashmap}} with the \texttt{Hand} object. When
+a finger moves (a \emph{point\_move} event) or releases the touch surface
+(\emph{point\_up}), the corresponding hand is loaded from the hash map and
+triggers a \emph{finger\_move} or \emph{finger\_up} gesture. If a released
+finger is the last of a hand, that hand is removed with a \emph{hand\_up}
+gesture.
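A sketch of this bookkeeping, assuming illustrative class names and an arbitrary distance threshold (gesture triggering is reduced to comments):

```python
import math

# Sketch of the hand tracker's bookkeeping; names and the threshold value
# are assumptions, not the reference implementation's.
HAND_DISTANCE_THRESHOLD = 200.0

class Hand:
    def __init__(self):
        self.fingers = {}  # touch point id -> location

    def centroid(self):
        xs = [x for x, _ in self.fingers.values()]
        ys = [y for _, y in self.fingers.values()]
        return sum(xs) / len(xs), sum(ys) / len(ys)

class HandTracker:
    def __init__(self):
        self.hands = []
        self.hand_of = {}  # hash map: touch point id -> Hand

    def point_down(self, point_id, position):
        # Find the hand whose centroid is closest to the new finger.
        closest = min(self.hands, default=None,
                      key=lambda h: _distance(position, h.centroid()))
        if (closest is None
                or _distance(position, closest.centroid()) > HAND_DISTANCE_THRESHOLD):
            closest = Hand()  # too far from any hand: hand_down gesture
            self.hands.append(closest)
        closest.fingers[point_id] = position  # finger_down gesture
        self.hand_of[point_id] = closest

    def point_up(self, point_id):
        # Load the corresponding hand from the hash map: finger_up gesture.
        hand = self.hand_of.pop(point_id)
        del hand.fingers[point_id]
        if not hand.fingers:
            self.hands.remove(hand)  # last finger released: hand_up gesture

def _distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])
```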
+
+\section{Results}
+\label{sec:results}
+
+% TODO: Evaluate whether the implementation and the test applications meet
+% the expectations/requirements stated in the design.
+
+\chapter{Conclusions}
+\label{chapter:conclusions}
 
 To support different devices, there must be an abstraction of device drivers so
 that gesture detection can be performed on a common set of low-level events.
@@ -772,6 +885,8 @@ A gesture trackers implementation is flexible, e.g. complex detection
 algorithms such as machine learning can be used simultaneously with other
 gesture trackers that use explicit detection.
 
+% TODO: return to the results from the test implementation
+
 \chapter{Suggestions for future work}
 \label{chapter:futurework}
 
@@ -928,120 +1043,4 @@ client application, as stated by the online specification
     values back to the actual screen dimension.
 \end{quote}
 
-\chapter{Gesture detection in the reference implementation}
-\label{app:implementation-details}
-
-The reference implementation contains three gesture tracker implementations,
-which are described in sections \ref{sec:basictracker} to
-\ref{sec:transformationtracker}. Section \ref{sec:handtracker} describes the
-custom ``hand tracker'' that is used by the test application from section
-\ref{sec:testapp}.
-
-\section{Basic tracker}
-\label{sec:basictracker}
-
-The ``basic tracker'' implementation exists only to provide access to low-level
-events in an application. Low-level events are only handled by gesture
-trackers, not by the application itself. Therefore, the basic tracker maps
-\emph{point\_\{down,move,up\}} events to equally named gestures that can be
-handled by the application.
-
-\section{Tap tracker}
-\label{sec:taptracker}
-
-The ``tap tracker'' detects three types of tap gestures:
-
-\begin{enumerate}
-    \item The basic \emph{tap} gesture is triggered when a touch point releases
-        the touch surface within a certain time and distance of its initial
-        position. When a \emph{point\_down} event is received, its location is
-        saved along with the current timestamp. On the next \emph{point\_up}
-        event of the touch point, the differences in time and position from the
-        saved values are compared with predefined thresholds to determine
-        whether a \emph{tap} gesture should be triggered.
-    \item A \emph{double tap} gesture consists of two sequential \emph{tap}
-        gestures that are located within a certain distance of each other, and
-        occur within a certain time window. When a \emph{tap} gesture is
-        triggered, the tracker saves it as the ``last tap'' along with the
-        current timestamp. When another \emph{tap} gesture is triggered, its
-        location and the current timestamp are compared with those of the
-        ``last tap'' gesture to determine whether a \emph{double tap} gesture
-        should be triggered. If so, the gesture is triggered at the location of
-        the ``last tap'', because the second tap may be less accurate.
-    \item A separate thread handles detection of \emph{single tap} gestures at
-        a rate of thirty times per second. When the time since the ``last tap''
-        exceeds the maximum time between two taps of a \emph{double tap}
-        gesture, a \emph{single tap} gesture is triggered.
-\end{enumerate}
-
-The \emph{single tap} gesture exists to make it possible to distinguish
-between single and double tap gestures. This distinction is not possible with the
-regular \emph{tap} gesture, since the first \emph{tap} gesture has already been
-handled by the application when the second \emph{tap} of a \emph{double tap}
-gesture is triggered.
-
-\section{Transformation tracker}
-\label{sec:transformationtracker}
-
-The transformation tracker triggers \emph{rotate}, \emph{pinch}, \emph{drag}
-and \emph{flick} gestures. These gestures use the centroid of all touch points.
-A \emph{rotate} gesture uses the difference in angle relative to the centroid
-of all touch points, and \emph{pinch} uses the difference in distance. Both
-values are normalized by dividing them by the number of touch points $N$. A
-\emph{pinch} gesture contains a scale factor, and therefore divides the
-current average distance to the centroid by the previous one. Any movement of
-the centroid is used for \emph{drag} gestures. When a dragged touch point is
-released, a \emph{flick} gesture is triggered in the direction of the
-\emph{drag} gesture.
-
-Figure \ref{fig:transformationtracker} shows an example situation in which a
-touch point is moved, triggering a \emph{pinch} gesture, a \emph{rotate}
-gesture and a \emph{drag} gesture.
-
-\transformationtracker
-
-The \emph{pinch} gesture in figure \ref{fig:pinchrotate} uses the ratio
-$d_2:d_1$ to calculate its $scale$ parameter. Note that the difference in
-distance $d_2 - d_1$ and the difference in angle $\alpha$ both relate to a
-single touch point. The \emph{pinch} and \emph{rotate} gestures that are
-triggered relate to all touch points, using the average of distances and
-angles. Since only one of the touch points has moved, the differences in
-distance and angle of the other touch points are zero. Thus, the averages can be
-calculated by dividing the differences in distance and angle of the moved touch
-point by the number of touch points $N$. The $scale$ parameter represents the
-scale relative to the previous situation, which results in the following
-formula:
-$$pinch.scale = \frac{d_1 + \frac{d_2 - d_1}{N}}{d_1}$$
-The angle used for the \emph{rotate} gesture is only divided by the number of
-touch points to obtain an average rotation of all touch points:
-$$rotate.angle = \frac{\alpha}{N}$$
-
-\section{Hand tracker}
-\label{sec:handtracker}
-
-The hand tracker sees each touch point as a finger. Based on a predefined
-distance threshold, each finger is assigned to a hand. Each hand consists of a
-list of finger locations, and the centroid of those locations.
-
-When a new finger is detected on the touch surface (a \emph{point\_down} event),
-the distance from that finger to all hand centroids is calculated. The hand
-with the closest centroid is the candidate hand for the new finger. If the
-distance to that centroid is larger than the predefined distance threshold, the
-finger is assumed to belong to a new hand and a \emph{hand\_down} gesture is
-triggered. Otherwise, the finger is assigned to the closest hand. In both cases, a
-\emph{finger\_down} gesture is triggered.
-
-Each touch point is assigned an ID by the reference implementation. When the
-hand tracker assigns a finger to a hand after a \emph{point\_down} event, its
-touch point ID is saved in a hash map\footnote{In computer science, a hash
-table or hash map is a data structure that uses a hash function to map
-identifying values, known as keys (e.g., a person's name), to their associated
-values (e.g., their telephone number). Source:
-\url{http://en.wikipedia.org/wiki/Hashmap}} with the \texttt{Hand} object. When
-a finger moves (a \emph{point\_move} event) or releases the touch surface
-(\emph{point\_up}), the corresponding hand is loaded from the hash map and
-triggers a \emph{finger\_move} or \emph{finger\_up} gesture. If a released
-finger is the last of a hand, that hand is removed with a \emph{hand\_up}
-gesture.
-
 \end{document}