Commit 3b799eb0 authored by Taddeüs Kroes's avatar Taddeüs Kroes

Addressed some trivial feedback comments.

parent 5283801d
......@@ -31,6 +31,7 @@
process would be capable of serving gestures to multiple applications at
the same time. A reference implementation and two test case applications
have been created to test the effectiveness of the architecture design.
% TODO: conclusions
\end{abstract}
% Set paragraph indentation
......@@ -230,21 +231,21 @@ detection for every new gesture-based application.
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.
The component that translates device-specific messages to common events, is
called the \emph{event driver}. The event driver runs in a loop, receiving
and analyzing driver messages. When a sequence of messages is analyzed as
an event, the event driver delegates the event to other components in the
architecture for translation to gestures.
Support for a touch driver can be added by implementing a new event
driver. Which event driver implementation an application uses depends on
the driver support of the touch device being used.
Because event driver implementations have a common output format in the
form of events, multiple event drivers can be used at the same time (see
figure \ref{fig:multipledrivers}). This design feature allows low-level
events from multiple devices to be aggregated into high-level gestures.
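The shared output format makes the drivers interchangeable from the rest of the architecture's point of view. A minimal sketch of this idea, assuming a hypothetical `Event` format and `TuioDriver` class (the names are illustrative, not the reference implementation's API):

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Event:
    """Common event format shared by all event driver implementations."""
    kind: str     # e.g. "point_down", "point_move", "point_up"
    x: float      # normalized screen coordinates in [0, 1]
    y: float

class EventDriver:
    """Base class: translates device-specific messages to common events."""
    def __init__(self, out_queue: Queue):
        self.out = out_queue

    def deliver(self, event: Event):
        self.out.put(event)

class TuioDriver(EventDriver):
    """Hypothetical driver translating TUIO cursor messages to events."""
    def on_cursor_update(self, x: float, y: float):
        self.deliver(Event("point_move", x, y))

# Multiple drivers can share one queue, so their low-level events are
# aggregated before gesture detection, as figure fig:multipledrivers shows.
queue: Queue = Queue()
tuio = TuioDriver(queue)
tuio.on_cursor_update(0.25, 0.75)
print(queue.get().kind)  # point_move
```

Because every driver emits the same `Event` type, the downstream gesture detection components never need to know which device produced an event.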
\multipledriversdiagram
......@@ -261,11 +262,12 @@ detection for every new gesture-based application.
What's more, a widget within the application window itself should be able
to respond to different gestures. For example, a button widget may respond
to a ``tap'' gesture to be activated, whereas the application window responds to
a ``pinch'' gesture to be resized. In order to restrict the occurrence of a
gesture to a particular widget in an application, the events used for the
gesture must be restricted to the area of the screen covered by that
widget. An important question is whether the architecture should offer a
solution to this problem, or leave the task of assigning gestures to
application widgets to the application developer.
If the architecture does not provide a solution, the ``gesture detection''
component in figure \ref{fig:fulldiagram} receives all events that occur on
......@@ -275,20 +277,19 @@ detection for every new gesture-based application.
illustrated in figure \ref{fig:ex1}, where two widgets on the screen can be
rotated independently. The rotation detection component that detects
rotation gestures receives all four fingers as input. If the two groups of
finger events are not separated by clustering them based on the area in
which they are placed, only one rotation event will occur.
\examplefigureone
A gesture detection component could use a heuristic clustering method based
on the distance between events. However, this method cannot guarantee
that a cluster of events corresponds with a particular application widget.
In short, a gesture detection component is difficult to implement without
awareness of the location of application widgets. Moreover, the
application developer still needs to direct gestures to a particular widget
manually. This requires geometric calculations in the application logic,
which is a tedious and error-prone task for the developer.
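A minimal sketch of such a distance-based heuristic illustrates its weakness; the threshold value is an arbitrary assumption, and the function has no knowledge of widget boundaries:

```python
# Naive distance-based clustering: events closer than a threshold are
# grouped. The threshold (in pixels) is arbitrary and cannot reflect
# where application widgets actually are.
THRESHOLD = 50.0

def cluster(points):
    clusters = []
    for p in points:
        for c in clusters:
            # Join an existing cluster if any member is close enough.
            if any(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   < THRESHOLD for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two fingers on each of two distant widgets form two clusters...
print(len(cluster([(10, 10), (20, 15), (200, 200), (210, 190)])))  # 2
# ...but fingers on adjacent widgets may wrongly merge into one.
print(len(cluster([(10, 10), (45, 10)])))  # 1
```

The second call shows the failure mode argued above: events belonging to two neighbouring widgets end up in a single cluster, so only one gesture would be detected.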
The architecture described here groups events that occur inside the area
covered by a widget, before passing them on to a gesture detection
......@@ -316,15 +317,15 @@ detection for every new gesture-based application.
function to an event, which is called when the event occurs. Because
developers are familiar with this concept, the architecture uses a
callback mechanism to handle gestures in an application. Callback handlers
are bound to event areas, since event areas control the grouping of events
and thus the occurrence of gestures in an area of the screen.
\subsection{Area tree}
\label{sec:tree}
A basic data structure for storing event areas in the architecture would be
a list of event areas. When the event driver delegates an event, it is
accepted by each event area that contains the event coordinates.
If the architecture were to be used in combination with an application
framework, each widget that responds to gestures should have a mirroring
......@@ -394,17 +395,17 @@ detection for every new gesture-based application.
gesture detection component, nor to the ancestors of the event area.
The concept of an event area is based on the assumption that the set of
originating events that form a particular gesture can be determined
exclusively based on the location of the events. This is a reasonable
assumption for simple touch objects whose only parameter is a position,
such as a pen or a human finger. However, more complex touch objects can
have additional parameters, such as rotational orientation or color. An
even more generic concept is the \emph{event filter}, which detects whether
an event should be assigned to a particular gesture detection component
based on all available parameters. This level of abstraction provides
additional methods of interaction. For example, a camera-based multi-touch
surface could make a distinction between gestures performed with a blue
gloved hand, and gestures performed with a green gloved hand.
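The event filter generalization can be sketched as a predicate over all event parameters. This is a sketch of the concept only; the `color` parameter is a hypothetical attribute of the glove example, not part of the reference implementation:

```python
class EventFilter:
    """Generalization of an event area: decides per event, using any
    available parameters, whether the event belongs to a particular
    gesture detection component."""

    def __init__(self, predicate):
        self.predicate = predicate

    def matches(self, event):
        return self.predicate(event)

# Distinguish gestures performed with differently colored gloves on a
# camera-based multi-touch surface (hypothetical 'color' parameter).
blue_filter = EventFilter(lambda e: e.get("color") == "blue")
green_filter = EventFilter(lambda e: e.get("color") == "green")

event = {"x": 0.4, "y": 0.6, "color": "blue"}
print(blue_filter.matches(event))   # True
print(green_filter.matches(event))  # False
```

An event area is then simply an event filter whose predicate inspects only the positional parameters.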
As mentioned in the introduction chapter [\ref{chapter:introduction}], the
scope of this thesis is limited to multi-touch surface based devices, for
......@@ -708,16 +709,19 @@ between detected fingers to detect which fingers belong to the same hand. The
application draws a line from each finger to the hand it belongs to, as visible
in figure \ref{fig:testapp}.
Note that the overlay event area, though covering the entire screen surface, is
not used as the root of the event area tree. Instead, the overlay is placed on
top of the application window (being a rightmost sibling of the application
window event area in the tree). This is necessary because the transformation
trackers in the application window stop the propagation of events. The hand
tracker needs to capture all events to be able to give an accurate
representation of all fingers touching the screen. Therefore, the overlay
should delegate events to the hand tracker before they are stopped by a
transformation tracker. Placing the overlay over the application window forces
the screen event area to delegate events to the overlay event area first. The
event area implementation delegates events to its children in right-to-left
order, because areas that are added to the tree later are assumed to be
positioned over their previously added siblings.
\begin{figure}[h!]
\center
......@@ -738,17 +742,6 @@ transformation gestures. Because the propagation of these events is stopped,
overlapping polygons do not cause a problem. Figure \ref{fig:testappdiagram}
shows the tree structure used by the application.
\testappdiagram
\section{Conclusions}
......@@ -837,7 +830,7 @@ of all requirements the set of events must meet to form the gesture.
A signature on a multi-touch surface can be described using a state machine
of its touch objects. The states of a simple touch point could be
$\{down, move, hold, up\}$ to indicate respectively that a point is put down, is
being moved, is held on a position for some time, and is released. In this
case, a ``drag'' gesture can be described by the sequence $down - move - up$
and a ``select'' gesture by the sequence $down - hold$. If the set of states is
......@@ -848,13 +841,14 @@ states can be added: $\{start, stop\}$ to indicate that a point starts and stops
moving. The resulting state transitions are sequences $down - start - move -
stop - up$ and $down - start - move - up$ (the latter does not include a $stop$
to indicate that the element must keep moving after the gesture had been
performed). The two sequences describe a ``drag'' gesture and a ``flick''
gesture, respectively.
An additional way to describe even more complex gestures is to use other
gestures in a signature. An example is to combine $select - drag$ to specify
that an element must be selected before it can be dragged.
The application of a state machine to describe multi-touch gestures is a
subject well worth exploring in the future.
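The signatures above can be sketched as a lookup from state sequences to gesture names. This is a minimal sketch of the idea, not a full state machine implementation:

```python
# Gesture signatures as sequences of touch point states, using the
# extended state set {down, start, move, stop, hold, up} from the text.
SIGNATURES = {
    ("down", "start", "move", "stop", "up"): "drag",
    ("down", "start", "move", "up"): "flick",
    ("down", "hold"): "select",
}

def detect(states):
    """Return the gesture whose signature matches the observed
    sequence of touch point states, or None."""
    return SIGNATURES.get(tuple(states))

print(detect(["down", "start", "move", "stop", "up"]))  # drag
print(detect(["down", "start", "move", "up"]))          # flick
print(detect(["down", "hold"]))                         # select
```

Composite signatures such as $select - drag$ could then be expressed as sequences over detected gestures rather than raw states.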
\section{Daemon implementation}
......@@ -998,9 +992,7 @@ values are normalized using division by the number of touch points $N$. A
the current by the previous average distance to the centroid. Any movement of
the centroid is used for \emph{drag} gestures. When a dragged touch point is
released, a \emph{flick} gesture is triggered in the direction of the
\emph{drag} gesture.
Figure \ref{fig:transformationtracker} shows an example situation in which a
touch point is moved, triggering a \emph{pinch} gesture, a \emph{rotate}
......@@ -1009,16 +1001,19 @@ gesture and a \emph{drag} gesture.
\transformationtracker
The \emph{pinch} gesture in figure \ref{fig:pinchrotate} uses the ratio
$d_2:d_1$ to calculate its $scale$ parameter. Note that the difference in
distance $d_2 - d_1$ and the difference in angle $\alpha$ both relate to a
single touch point. The \emph{pinch} and \emph{rotate} gestures that are
triggered relate to all touch points, using the average of distances and
angles. Since all touch points except the moved one are stationary, their
differences in distance and angle are zero. Thus, the averages can be
calculated by dividing the differences in distance and angle of the moved touch
point by the number of touch points $N$. The $scale$ parameter represents the
scale relative to the previous situation, which results in the following
formula:
$$pinch.scale = \frac{d_1 + \frac{d_2 - d_1}{N}}{d_1}$$
The angle used for the \emph{rotate} gesture is only divided by the number of
touch points to obtain an average rotation of all touch points:
$$rotate.angle = \frac{\alpha}{N}$$
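As a quick numeric check of the two formulas above, assume one of $N = 4$ touch points moves so that its distance to the centroid grows from $d_1 = 100$ to $d_2 = 120$ and its angle changes by $\alpha = 0.2$ radians (illustrative numbers):

```python
# Worked example of the averaging described above: only one of N touch
# points moves; its per-point differences are averaged over all N points.
N = 4
d1, d2 = 100.0, 120.0  # distance to the centroid before and after the move
alpha = 0.2            # change in angle of the moved point, in radians

pinch_scale = (d1 + (d2 - d1) / N) / d1
rotate_angle = alpha / N

print(pinch_scale)   # 1.05
print(rotate_angle)  # 0.05
```

The moved point alone would suggest a scale of $1.2$, but averaged over four points the \emph{pinch} gesture reports a scale of $1.05$, matching the formula.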
\section{Hand tracker}
......