Commit 7a0b1b32 authored by Taddeüs Kroes

Worked on report.

parent 7b535751
@@ -253,13 +253,13 @@
edge[linefrom, dotted, bend left=65] node[left] {4} (gray);
\end{tikzpicture}
}
\caption{Two nested squares both listen to rotation gestures. The two
figures both show a touch object triggering an event, which is
delegated through the event area tree in the order indicated by the numbered
arrow labels. Normal arrows represent events, dotted arrows represent
gestures. Note that the dotted arrows only represent the path a gesture
would travel in the tree \emph{if triggered}, not an actual triggered
gesture.}
\caption{
Two nested squares both listen to rotation gestures. The two
figures both show a touch object triggering an event, which is
delegated through the event area tree in the order indicated by the
numbered arrow labels. Dotted arrows represent a flow of gestures,
while regular arrows represent events.
}
\label{fig:eventpropagation}
\end{figure}
}
@@ -381,7 +381,8 @@ detection for every new gesture-based application.
area in the tree that contains the triggered event. That event area should
be the first to delegate the event to its gesture detection components, and
then propagate the event up in the tree to its ancestors. A gesture
detection component can stop the propagation of the event.
detection component can stop its corresponding event area from propagating
the event any further.
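
To make the delegation order concrete, a minimal Python sketch of this
scheme could look as follows. The names (\texttt{EventArea},
\texttt{delegate\_event}, \texttt{handle\_event}) and the boolean return
value used to stop propagation are illustrative assumptions, not the
architecture's actual interface.

\begin{verbatim}
class EventArea:
    """A node in the event area tree (illustrative sketch)."""

    def __init__(self, parent=None):
        self.parent = parent
        self.trackers = []  # gesture detection components of this area

    def delegate_event(self, event):
        # The area containing the event delegates it to its own
        # detection components first.
        stop_propagation = False
        for tracker in self.trackers:
            # A detection component may return True to stop this area
            # from propagating the event any further.
            if tracker.handle_event(event):
                stop_propagation = True

        # Otherwise, the event propagates up the tree to the ancestors.
        if not stop_propagation and self.parent is not None:
            self.parent.delegate_event(event)
\end{verbatim}
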
An additional type of event propagation is ``immediate propagation'', which
indicates propagation of an event from one gesture detection component to
@@ -422,20 +423,6 @@ detection for every new gesture-based application.
complex gestures, like the writing of a character from the alphabet,
require more advanced detection algorithms.
A way to detect these complex gestures based on a sequence of input events,
is with the use of machine learning methods, such as the Hidden Markov
Models \footnote{A Hidden Markov Model (HMM) is a statistical model without
a memory, it can be used to detect gestures based on the current input
state alone.} used for sign language detection by
\cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to a
feature vector that is recognized as a particular gesture with a certain
probability. An advantage of using machine learning with respect to an
imperative programming style is that complex gestures can be described
without the use of explicit detection logic. For example, the detection of
the character `A' being written on the screen is difficult to implement
using an imperative programming style, while a trained machine learning
system can produce a match with relative ease.
Sequences of events that are triggered by a multi-touch surface are often
of manageable complexity. An imperative programming style is sufficient to
detect many common gestures, like rotation and dragging. The
@@ -447,25 +434,40 @@ detection for every new gesture-based application.
not managed well, the detection logic is prone to become chaotic and
over-complex.
A way to detect more complex gestures based on a sequence of input events
is to use machine learning methods, such as the Hidden Markov
Models\footnote{A Hidden Markov Model (HMM) is a statistical model without
memory; it can be used to detect gestures based on the current input state
alone.} used for sign language detection by Gerhard Rigoll et al.
\cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to a
feature vector that is recognized as a particular gesture with a certain
probability. An advantage of using machine learning over an imperative
programming style is that complex gestures can be described without the
use of explicit detection logic, thus reducing code complexity. For
example, the detection of the character `A' being written on the screen is
difficult to implement using an imperative programming style, while a
trained machine learning system can produce a match with relative ease.
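
As a rough illustration of this approach, the following Python sketch
trains one HMM per gesture on sequences of feature vectors and classifies
a new sequence by maximum likelihood. The third-party \texttt{hmmlearn}
package and the model parameters are assumptions made for this example
only; the feature extraction step is omitted.

\begin{verbatim}
import numpy as np
from hmmlearn import hmm  # third-party package, assumed for this sketch

def train_models(training_data):
    """training_data maps a gesture name to a list of example
    sequences, each an (n_samples, n_features) feature-vector array."""
    models = {}
    for name, sequences in training_data.items():
        X = np.concatenate(sequences)
        lengths = [len(seq) for seq in sequences]
        model = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                                n_iter=20)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify(models, sequence):
    # The gesture whose model assigns the highest log-likelihood wins.
    return max(models, key=lambda name: models[name].score(sequence))
\end{verbatim}
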
To manage complexity and support multiple styles of gesture detection
logic, the architecture has adopted the tracker-based design as described
by \cite{win7touch}. Different detection components are wrapped in separate
gesture tracking units, or \emph{gesture trackers}. The input of a gesture
tracker is provided by an event area in the form of events. Each gesture
detection component is wrapped in a gesture tracker with a fixed type of
input and output. Internally, the gesture tracker can adopt any programming
style. A character recognition component can use an HMM, whereas a tap
detection component defines a simple function that compares event
coordinates.
by Manoj Kumar \cite{win7touch}. Different detection components are wrapped
in separate gesture tracking units called \emph{gesture trackers}. The
input of a gesture tracker is provided by an event area in the form of
events. Each gesture detection component is wrapped in a gesture tracker
with a fixed type of input and output. Internally, the gesture tracker can
adopt any programming style. A character recognition component can use an
HMM, whereas a tap detection component defines a simple function that
compares event coordinates.
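
A gesture tracker can be pictured as a small class with a fixed event-in,
gesture-out interface. The Python sketch below shows a hypothetical base
class and a tap tracker that only compares event coordinates and
timestamps; the event attributes (\texttt{type}, \texttt{touch\_id},
\texttt{x}, \texttt{y}), the thresholds and the \texttt{trigger\_gesture}
call are illustrative assumptions rather than the architecture's real API.

\begin{verbatim}
import math
import time

class GestureTracker:
    """Base class of a gesture tracking unit; it receives events from
    one event area and reports detected gestures back to it."""

    def __init__(self, area):
        self.area = area

    def handle_event(self, event):
        raise NotImplementedError

class TapTracker(GestureTracker):
    """Simple imperative detection: a touch point released close to
    where it went down, shortly afterwards, is a tap."""

    MAX_DISTANCE = 10   # pixels (assumed threshold)
    MAX_DURATION = 0.2  # seconds (assumed threshold)

    def __init__(self, area):
        super().__init__(area)
        self.down_points = {}  # touch id -> (x, y, timestamp)

    def handle_event(self, event):
        if event.type == 'point_down':
            self.down_points[event.touch_id] = (event.x, event.y,
                                                time.time())
        elif event.type == 'point_up' and event.touch_id in self.down_points:
            x, y, t = self.down_points.pop(event.touch_id)
            if (math.hypot(event.x - x, event.y - y) <= self.MAX_DISTANCE
                    and time.time() - t <= self.MAX_DURATION):
                # Report the gesture to the corresponding event area.
                self.area.trigger_gesture('tap', x=event.x, y=event.y)
\end{verbatim}
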
When a gesture tracker detects a gesture, this gesture is triggered in the
corresponding event area. The event area then calls the callbacks which are
bound to the gesture type by the application.
The use of gesture trackers as small detection units provides extendability
The use of gesture trackers as small detection units allows extendability
of the architecture. A developer can write a custom gesture tracker and
register it in the architecture. The tracker can use any type of detection
logic internally, as long as it translates events to gestures.
logic internally, as long as it translates low-level events to high-level
gestures.
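
The extension and binding mechanism might be used as sketched below. This
is a hypothetical illustration: the global registry and the
\texttt{register\_tracker}, \texttt{bind} and \texttt{trigger\_gesture}
functions are assumptions, and in the actual architecture gestures are
dispatched through event areas rather than a global table.

\begin{verbatim}
TRACKER_TYPES = {}     # gesture name -> tracker class (extension point)
GESTURE_HANDLERS = {}  # gesture name -> application callbacks

def register_tracker(gesture_name, tracker_class):
    """Register a developer-written detection unit."""
    TRACKER_TYPES[gesture_name] = tracker_class

def bind(gesture_name, callback):
    """Bind an application-level callback to a gesture type."""
    GESTURE_HANDLERS.setdefault(gesture_name, []).append(callback)

def trigger_gesture(gesture_name, **attributes):
    """Called by a tracker once low-level events form a gesture."""
    for callback in GESTURE_HANDLERS.get(gesture_name, []):
        callback(attributes)

class MyCustomTracker:
    """Placeholder for a custom tracker that translates low-level
    events into the high-level gesture 'my_gesture'."""
    def handle_event(self, event):
        pass  # any style of detection logic may be used here

register_tracker('my_gesture', MyCustomTracker)
bind('my_gesture', lambda gesture: print('my_gesture:', gesture))
trigger_gesture('my_gesture', fingers=2)
\end{verbatim}
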
An example of a possible gesture tracker implementation is a
``transformation tracker'' that detects rotation, scaling and translation
@@ -498,7 +500,10 @@ detection for every new gesture-based application.
An advantage of a daemon setup is that it can serve multiple applications
at the same time. Alternatively, each application that uses gesture
interaction would start its own instance of the architecture in a separate
process, which would be less efficient.
process, which would be less efficient. The network communication layer
also allows the architecture and a client application to run on separate
machines, thus distributing the computational load. The two machines may
even run different operating systems.
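
To make the client/daemon split concrete, the sketch below shows a minimal
Python client that connects to the daemon over TCP and prints incoming
gestures. The port number and the newline-delimited JSON message format
are assumptions made purely for illustration; the report does not specify
the wire protocol at this point.

\begin{verbatim}
import json
import socket

def listen_for_gestures(host='localhost', port=5555):
    """Connect to the gesture daemon and print incoming gestures."""
    with socket.create_connection((host, port)) as connection:
        buffer = b''
        while True:
            chunk = connection.recv(4096)
            if not chunk:
                break  # the daemon closed the connection
            buffer += chunk
            # Each message is assumed to be one JSON object per line.
            while b'\n' in buffer:
                line, buffer = buffer.split(b'\n', 1)
                gesture = json.loads(line)
                print('gesture received:', gesture.get('type'))
\end{verbatim}
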
\section{Example usage}
\label{sec:example}