Commit b091c913 authored by Taddeüs Kroes

Corrected some typos and moved a paragraph.

parent be6d8ffd
@@ -103,7 +103,7 @@
\node[block, below of=eventdriver, dashed] (analysis) {Event analysis}
edge[linefrom] (eventdriver)
-edge[linefrom] node[right=5] {events} (secondeventdriver);
+edge[linefrom] node[right=5pt] {events} (secondeventdriver);
\node[block, below of=analysis] {Application}
edge[linefrom] node[right, near start] {gestures} (analysis);
@@ -89,11 +89,7 @@ detection for every new gesture-based application.
Chapter \ref{chapter:related} describes related work that inspired a design
for the architecture. The design is presented in chapter
-\ref{chapter:design}. Sections \ref{sec:multipledrivers} to
-\ref{sec:daemon} define requirements for the architecture, and introduce
-architecture components that meet these requirements. Section
-\ref{sec:example} then shows the use of the architecture in an example
-application. Chapter \ref{chapter:testapps} presents a reference
+\ref{chapter:design}. Chapter \ref{chapter:testapps} presents a reference
implementation of the architecture, and two test case applications that show
the practical use of its components as presented in chapter
\ref{chapter:design}. Finally, some suggestions for future research on the
@@ -153,11 +149,11 @@ detection for every new gesture-based application.
The alternative to machine learning is to define a predefined set of rules
for each gesture. Manoj Kumar \cite{win7touch} presents a Windows 7
application, written in Microsofts .NET, which detects a set of basic
-directional gestures based the movement of a stylus. The complexity of the
-code is managed by the separation of different gesture types in different
-detection units called ``gesture trackers''. The application shows that
-predefined gesture detection rules do not necessarily produce unmanageable
-code.
+directional gestures based on the movement of a stylus. The complexity of
+the code is managed by the separation of different gesture types in
+different detection units called ``gesture trackers''. The application
+shows that predefined gesture detection rules do not necessarily produce
+unmanageable code.
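The rule-based "gesture tracker" idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Kumar's .NET code; the class name, event types, and thresholds are all made up for the example.

```python
class TapTracker:
    """Sketch of one rule-based gesture tracker: a "tap" is a touch that
    is released quickly and without moving far. Thresholds are invented."""

    def __init__(self, on_tap, max_duration=0.3, max_distance=10.0):
        self.on_tap = on_tap
        self.max_duration = max_duration
        self.max_distance = max_distance
        self.down = None  # (time, x, y) of the current touch, if any

    def handle_event(self, etype, t, x, y):
        if etype == "down":
            self.down = (t, x, y)
        elif etype == "up" and self.down is not None:
            t0, x0, y0 = self.down
            self.down = None
            # Predefined detection rules for this one gesture type:
            if (t - t0 <= self.max_duration
                    and abs(x - x0) + abs(y - y0) <= self.max_distance):
                self.on_tap(x, y)
```

Giving each gesture type its own tracker class keeps the per-gesture rules isolated, which is exactly the separation that keeps the detection code manageable.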
\section{Analysis of related work}
@@ -256,7 +252,7 @@ detection for every new gesture-based application.
used.
Because driver implementations have a common output format in the form of
-events, multiple event drivers can run at the same time (see figure
+events, multiple event drivers can be used at the same time (see figure
\ref{fig:multipledrivers}). This design feature allows low-level events
from multiple devices to be aggregated into high-level gestures.
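The common output format that makes multiple simultaneous drivers possible can be sketched as follows. The field names and the `emit` helper are illustrative assumptions, not the reference implementation's API.

```python
from dataclasses import dataclass

# Hypothetical common event format shared by all event drivers.
@dataclass
class Event:
    source: str   # which driver produced the event
    etype: str    # e.g. "point_down", "point_move", "point_up"
    x: float
    y: float

class EventDriver:
    """Translates device-specific input into common Events and appends
    them to a queue shared by all running drivers."""

    def __init__(self, name, queue):
        self.name = name
        self.queue = queue

    def emit(self, etype, x, y):
        self.queue.append(Event(self.name, etype, x, y))
```

Because every driver writes the same `Event` type into one queue, the event analysis component can aggregate low-level events from several devices into a single high-level gesture.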
@@ -272,8 +268,8 @@ detection for every new gesture-based application.
an event takes place. User interfaces of applications that do not run in
full screen modus are contained in a window. Events which occur outside the
application window should not be handled by the application in most cases.
-What's more, widget within the application window itself should be able to
-respond to different gestures. E.g. a button widget may respond to a
+What's more, a widget within the application window itself should be able
+to respond to different gestures. E.g. a button widget may respond to a
``tap'' gesture to be activated, whereas the application window responds to
a ``pinch'' gesture to be resized. In order to be able to direct a gesture
to a particular widget in an application, a gesture must be restricted to
@@ -304,9 +300,9 @@ detection for every new gesture-based application.
application logic, which is a tedious and error-prone task for the
developer.
-A better solution is to group events that occur inside the area covered by
-a widget, before passing them on to a gesture detection component.
-Different gesture detection components can then detect gestures
+The architecture described here groups events that occur inside the area
+covered by a widget, before passing them on to a gesture detection
+component. Different gesture detection components can then detect gestures
simultaneously, based on different sets of input events. An area of the
screen surface will be represented by an \emph{event area}. An event area
filters input events based on their location, and then delegates events to
@@ -318,7 +314,7 @@ detection for every new gesture-based application.
represented by two event areas, each having a different rotation detection
component.
-\subsection*{Callback mechanism}
+\subsection{Callback mechanism}
When a gesture is detected by a gesture detection component, it must be
handled by the client application. A common way to handle events in an
@@ -333,38 +329,12 @@ detection for every new gesture-based application.
\areadiagram
%Note that the boundaries of an area are only used to group events, not
%gestures. A gesture could occur outside the area that contains its
%originating events, as illustrated by the example in figure \ref{fig:ex2}.
%\examplefiguretwo
-A remark must be made about the use of event areas to assign events to the
-detection of some gesture. The concept of an event area is based on the
-assumption that the set or originating events that form a particular
-gesture, can be determined based exclusively on the location of the events.
-This is a reasonable assumption for simple touch objects whose only
-parameter is a position, such as a pen or a human finger. However, more
-complex touch objects can have additional parameters, such as rotational
-orientation or color. An even more generic concept is the \emph{event
-filter}, which detects whether an event should be assigned to a particular
-gesture detection component based on all available parameters. This level
-of abstraction provides additional methods of interaction. For example, a
-camera-based multi-touch surface could make a distinction between gestures
-performed with a blue gloved hand, and gestures performed with a green
-gloved hand.
-
-As mentioned in the introduction chapter [\ref{chapter:introduction}], the
-scope of this thesis is limited to multi-touch surface based devices, for
-which the \emph{event area} concept suffices. Section \ref{sec:eventfilter}
-explores the possibility of event areas to be replaced with event filters.
\subsection{Area tree}
\label{sec:tree}
-The most simple usage of event areas in the architecture would be a list of
-event areas. When the event driver delegates an event, it is accepted by
-each event area that contains the event coordinates.
+A basic usage of event areas in the architecture would be a list of event
+areas. When the event driver delegates an event, it is accepted by each
+event area that contains the event coordinates.
If the architecture were to be used in combination with an application
framework like GTK \cite{GTK}, each GTK widget that responds to gestures
@@ -375,15 +345,15 @@ detection for every new gesture-based application.
too.
This process is simplified by the arrangement of event areas in a tree
structure. A root event area represents the panel, containing five other
event areas which are positioned relative to the root area. The relative
positions do not need to be updated when the panel area changes its
-position. GUI frameworks, like GTK, use this kind of tree structure to
-manage graphical widgets.
+position. GUI frameworks use this kind of tree structure to manage
+graphical widgets.
If the GUI toolkit provides an API for requesting the position and size of
a widget, a recommended first step when developing an application is to
-create some subclass of the area that automatically synchronizes with the
+create a subclass of the area that automatically synchronizes with the
position of a widget from the GUI framework.
\subsection{Event propagation}
@@ -430,10 +400,28 @@ detection for every new gesture-based application.
When regular propagation is stopped, the event is propagated to other
gesture detection components first, before actually being stopped.
-\eventpropagationfigure
+\newpage
+\eventpropagationfigure
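The propagation rule described above (an event travels from an area up through its ancestors, and stopping regular propagation still lets the other components on the same area see the event first) can be sketched as follows. All names here are illustrative assumptions.

```python
STOP = "stop"  # hypothetical sentinel a handler returns to stop propagation

class PropagatingArea:
    def __init__(self, parent=None):
        self.parent = parent
        self.handlers = []  # gesture detection components on this area

    def dispatch(self, event):
        area = self
        while area is not None:
            stopped = False
            for handler in area.handlers:
                if handler(event) == STOP:
                    # Remember the stop, but keep delivering the event to
                    # the remaining handlers of this same area first.
                    stopped = True
            if stopped:
                return  # only now is propagation actually stopped
            area = area.parent
```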
-\section{Detecting gestures from events}
+The concept of an event area is based on the assumption that the set of
+originating events that form a particular gesture can be determined based
+exclusively on the location of the events. This is a reasonable assumption
+for simple touch objects whose only parameter is a position, such as a pen
+or a human finger. However, more complex touch objects can have additional
+parameters, such as rotational orientation or color. An even more generic
+concept is the \emph{event filter}, which detects whether an event should
+be assigned to a particular gesture detection component based on all
+available parameters. This level of abstraction provides additional methods
+of interaction. For example, a camera-based multi-touch surface could make
+a distinction between gestures performed with a blue gloved hand, and
+gestures performed with a green gloved hand.
+
+As mentioned in the introduction chapter [\ref{chapter:introduction}], the
+scope of this thesis is limited to multi-touch surface based devices, for
+which the \emph{event area} concept suffices. Section \ref{sec:eventfilter}
+explores the possibility of event areas to be replaced with event filters.
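The \emph{event filter} generalisation can be sketched as a predicate over all event parameters; the "color" parameter and the class and method names below are hypothetical.

```python
class EventFilter:
    """Assigns an event to a gesture detection component when the
    predicate accepts it; the predicate may inspect any event parameter,
    not just the position."""

    def __init__(self, predicate, detector):
        self.predicate = predicate
        self.detector = detector

    def offer(self, event):
        if self.predicate(event):
            self.detector(event)
```

An event area is then just the special case whose predicate tests the event coordinates against a screen region.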
+\section{Detecting gestures from low-level events}
\label{sec:gesture-detection}
The low-level events that are grouped by an event area must be translated
@@ -555,14 +543,14 @@ start the GUI main loop in the current thread
\examplediagram
-\chapter{Test applications}
+\chapter{Implementation and test applications}
\label{chapter:testapps}
A reference implementation of the design has been written in Python. Two test
applications have been created to test if the design ``works'' in a practical
application, and to detect its flaws. One application is mainly used to test
the gesture tracker implementations. The other application uses multiple event
-areas in a tree structure, demonstrating event delegation and propagation. Teh
+areas in a tree structure, demonstrating event delegation and propagation. The
second application also defines a custom gesture tracker.
To test multi-touch interaction properly, a multi-touch device is required. The
@@ -710,9 +698,9 @@ first.
\testappdiagram
-%\section{Discussion}
-%
-%\emph{TODO: Tekortkomingen aangeven die naar voren komen uit de tests}
+\section{Results}
+
+\emph{TODO: point out the shortcomings that emerge from the tests}
% Different devices/drivers emit a different kind of primitive events.
% A translation of these device-specific events into a common format of
@@ -737,12 +725,12 @@ first.
\label{sec:eventfilter}
As mentioned in section \ref{sec:areas}, the concept of an event area is based
-on the assumption that the set or originating events that form a particular
+on the assumption that the set of originating events that form a particular
gesture can be determined based exclusively on the location of the events.
Since this thesis focuses on multi-touch surface based devices, and every
object on a multi-touch surface has a position, this assumption is valid.
However, the design of the architecture is meant to be more generic; to provide
-a structured design of managing gesture detection.
+a structured design for managing gesture detection.
An in-air gesture detection device, such as the Microsoft Kinect \cite{kinect},
provides 3D positions. Some multi-touch tables work with a camera that can also
@@ -770,8 +758,8 @@ whether multi-touch gestures can be described in a formal way so that explicit
detection code can be avoided.
\cite{GART} and \cite{conf/gw/RigollKE97} propose the use of machine learning
-to recognizes gestures. To use machine learning, a set of input events forming
-a particular gesture must be represented as a feature vector. A learning set
+to recognize gestures. To use machine learning, a set of input events forming a
+particular gesture must be represented as a feature vector. A learning set
containing a set of feature vectors that represent some gesture ``teaches'' the
machine what the features of the gesture look like.
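One way a set of input events could be represented as a feature vector is sketched below. The choice of movement angles as features and the fixed-length padding are illustrative assumptions, not taken from the cited work.

```python
import math

def feature_vector(points, length=8):
    """Encode a touch path as a fixed-length vector of movement angles,
    so that strokes with different numbers of events are comparable."""
    angles = [math.atan2(y1 - y0, x1 - x0)
              for (x0, y0), (x1, y1) in zip(points, points[1:])]
    angles = angles[:length]                    # truncate long strokes
    return angles + [0.0] * (length - len(angles))  # pad short strokes
```

A learning set is then simply a collection of such vectors, each labelled with the gesture it represents.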
@@ -810,8 +798,8 @@ subject well worth exploring in the future.
\section{Daemon implementation}
-Section \ref{sec:daemon} proposes the usage of a network protocol to
-communicate between an architecture implementation and (multiple) gesture-based
+Section \ref{sec:daemon} proposes the use of a network protocol to communicate
+between an architecture implementation and (multiple) gesture-based
applications, as illustrated in figure \ref{fig:daemon}. The reference
implementation does not support network communication. If the architecture
design is to become successful in the future, the implementation of network
@@ -823,7 +811,7 @@ the basis for its communication layer.
If an implementation of the architecture will be released, a good idea would be
to do so within a community of application developers. A community can
contribute to a central database of gesture trackers, making the interaction
-from their applications available for use other applications.
+from their applications available for use in other applications.
Ideally, a user can install a daemon process containing the architecture so
that it is usable for any gesture-based application on the device. Applications