Commit a318b0d4 authored by Taddeüs Kroes

Rewrote part of gesture trackers section.

parent f39c98e4
...@@ -145,7 +145,7 @@
\architecture{
    \node[block, below of=driver] (eventdriver) {Event driver}
        edge[linefrom] node[right, near end] {driver-specific messages} (driver);
    \node[block, below of=eventdriver] (area) {Event area tree}
        edge[linefrom] node[right] {events} (eventdriver);
    \node[block, right of=area, xshift=7em] (tracker) {Gesture trackers}
        edge[linefrom, bend right=10] node[above] {events} (area)
...@@ -155,8 +155,9 @@
    \group{eventdriver}{eventdriver}{tracker}{area}{Architecture}
}
\caption{Extension of the diagram from figure \ref{fig:areadiagram}
    with gesture trackers. Gesture trackers detect high-level
    gestures from low-level events.}
\label{fig:trackerdiagram}
\end{figure}
}
...
...@@ -409,47 +409,47 @@ goal is to test the effectiveness of the design and detect its shortcomings.
\section{Detecting gestures from events}
\label{sec:gesture-detection}

The low-level events that are grouped by an event area must be translated
to high-level gestures in some way. Simple gestures, such as a tap or the
dragging of an element using one finger, are easy to detect by comparing
the positions of sequential $point\_down$ and $point\_move$ events. More
complex gestures, like the writing of a character from the alphabet,
require more advanced detection algorithms.
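
As an illustration, the following minimal sketch distinguishes a tap from a
drag by comparing the positions of sequential events. The event attributes
and the distance threshold are assumptions for the purpose of illustration,
not part of the actual implementation.

\begin{verbatim}
import math

MAX_TAP_DISTANCE = 10  # assumed threshold (pixels) between tap and drag

class SimpleDetector(object):
    """Compares sequential event positions to detect a tap or a drag."""
    def __init__(self):
        self.start = None

    def on_event(self, event):
        # 'event' is assumed to have 'type', 'x' and 'y' attributes.
        if event.type == 'point_down':
            self.start = (event.x, event.y)
        elif event.type == 'point_move' and self.start:
            dx, dy = event.x - self.start[0], event.y - self.start[1]
            if math.hypot(dx, dy) > MAX_TAP_DISTANCE:
                print('drag by (%.0f, %.0f)' % (dx, dy))
        elif event.type == 'point_up' and self.start:
            dx, dy = event.x - self.start[0], event.y - self.start[1]
            if math.hypot(dx, dy) <= MAX_TAP_DISTANCE:
                print('tap at (%.0f, %.0f)' % self.start)
            self.start = None
\end{verbatim}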

A way to detect complex gestures based on a sequence of input features is
to use machine learning methods, such as Hidden Markov Models
\footnote{A Hidden Markov Model (HMM) is a statistical model without a
memory; it can be used to detect gestures based on the current input state
alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped
to a feature vector that is recognized as a particular gesture with a
certain probability. An advantage of machine learning over an imperative
programming style is that complex gestures can be described without
explicit detection logic. For example, the detection of the character `A'
being written on the screen is difficult to implement using an imperative
programming style, while a trained machine learning system can produce a
match with relative ease.
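
To make the HMM approach concrete, the sketch below scores an observation
sequence against a set of per-gesture models using the standard forward
algorithm and picks the most probable gesture. The model parameters are
assumed to have been trained beforehand; all names are hypothetical.

\begin{verbatim}
def forward_probability(obs, pi, A, B):
    """P(obs | model) via the forward algorithm. pi[i] holds initial
    state probabilities, A[i][j] transition probabilities and B[i][o]
    observation probabilities."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[i][o] * sum(alpha[j] * A[j][i] for j in range(n))
                 for i in range(n)]
    return sum(alpha)

def recognize(obs, models):
    """Return the name of the gesture whose HMM explains obs best.
    'models' maps a gesture name to a trained (pi, A, B) triple."""
    score = lambda name: forward_probability(obs, *models[name])
    return max(models, key=score)
\end{verbatim}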

Sequences of events that are triggered by a multi-touch surface are often
of manageable complexity. An imperative programming style is sufficient to
detect many common gestures, like rotation and dragging. The imperative
programming style is also familiar and understandable for a wide range of
application developers. Therefore, the architecture should support an
imperative style of gesture detection.
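
For example, rotation can be detected imperatively by tracking the angle of
the line between two touch points. The sketch below is an assumed
illustration of this style, not the architecture's actual detection code.

\begin{verbatim}
import math

class RotationDetector(object):
    """Detects rotation from the positions of two touch points."""
    def __init__(self):
        self.angle = None

    def update(self, p1, p2):
        # Angle of the line through both touch points.
        angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        if self.angle is not None:
            delta = angle - self.angle
            # Normalize to (-pi, pi] so small rotations stay small.
            delta = math.atan2(math.sin(delta), math.cos(delta))
            if delta:
                print('rotate by %.3f radians' % delta)
        self.angle = angle
\end{verbatim}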

A problem with the imperative programming style is that the explicit
detection of different gestures requires different gesture detection
components. If these components are not managed well, the detection logic
is prone to become chaotic and over-complex.

To manage complexity and support multiple methods of gesture detection, the
architecture adopts the tracker-based design described in \cite{win7touch}.
Different detection components are wrapped in separate gesture tracking
units, or \emph{gesture trackers}. The input of a gesture tracker is
provided by an event area in the form of events. When a gesture tracker
detects a gesture, this gesture is triggered in the corresponding event
area. The event area then calls the callbacks that are bound to the gesture
type by the application. Figure \ref{fig:trackerdiagram} shows the position
of gesture trackers in the architecture.
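
The relation between event areas and gesture trackers can be sketched as
follows. The class and method names are assumptions for illustration, not
the actual implementation.

\begin{verbatim}
class EventArea(object):
    """Groups events and delegates them to its gesture trackers."""
    def __init__(self):
        self.trackers = []
        self.handlers = {}  # gesture type -> callbacks bound by the app

    def bind(self, gesture_type, callback):
        self.handlers.setdefault(gesture_type, []).append(callback)

    def delegate(self, event):
        # Feed a grouped event to every attached gesture tracker.
        for tracker in self.trackers:
            tracker.on_event(event)

    def trigger(self, gesture):
        # Called by a tracker once it detects a gesture: invoke the
        # application's callbacks for this gesture type.
        for callback in self.handlers.get(gesture.type, []):
            callback(gesture)

class GestureTracker(object):
    """Wraps a single gesture detection component."""
    def __init__(self, area):
        self.area = area  # the event area that feeds this tracker

    def on_event(self, event):
        raise NotImplementedError  # implemented by concrete trackers
\end{verbatim}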

\trackerdiagram
...