Commit 34c24e35 authored by Taddeüs Kroes

Rewrote 'Introduction' chapter in report.

parent 859174ba
@@ -25,67 +25,68 @@
\parindent 0pt
\parskip 1.5ex plus 0.5ex minus 0.2ex
% Table of contant on separate page
% Table of content on separate page
\tableofcontents
\chapter{Introduction}
% Rough problem statement
Multi-touch interaction is becoming increasingly common, mostly due to the wide
use of touch screens in phones and tablets. When programming applications using
this method of interaction, the programmer needs an abstraction of the raw data
provided by the touch driver of the device. This abstraction exists in several
multi-touch application frameworks like Nokia's
Qt\footnote{\url{http://qt.nokia.com/}}. However, applications that do not use
these frameworks have no access to their multi-touch events.
% Motivation
This problem was observed during an attempt to create a multi-touch
``interactor'' class for the Visualization Toolkit \cite[VTK]{VTK}. Because VTK
provides the application framework here, it is undesirable to use an entire
framework like Qt simultaneously only for its multi-touch support.
% Rough goal
The goal of this project is to define a generic multi-touch event triggering
architecture. To test the definition, a reference implementation is written in
Python.
\section{Definition of the problem}
% Main question
The goal of this thesis is to create a generic architecture for a
multi-touch event triggering mechanism for use in multi-touch applications.
% Sub-questions
To design such an architecture properly, the following questions are relevant:
\begin{itemize}
\item What is the input of the architecture? Different touch drivers
have different API's. To be able to support different drivers
(which is highly desirable), there should be a translation from the
driver API to a fixed input format.
\item How can extendability be accomplished? The set of supported
events should not be limited to a single implementation, but an
application should be able to define its own custom events.
\item How can the architecture be used by different programming
languages? A generic architecture should not be limited to be used
in only one language.
% TODO: put Qt link in bibtex
Multi-touch devices enable a user to interact with software using intuitive
body gestures, rather than with input devices such as a mouse and keyboard.
With the growing use of touch screens in phones and tablets, multi-touch
interaction is becoming increasingly common. The driver of a touch device
provides low-level events. The most basic representation of these low-level
events consists of \emph{down}, \emph{move} and \emph{up} events.
Multi-touch gestures must be designed in such a way that they can be
represented by a sequence of basic events. For example, a ``tap'' gesture can
be represented as a \emph{down} event that is followed by an \emph{up} event
within a certain time.
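As a minimal illustration of this representation, the following Python sketch
detects a ``tap'' from a stream of basic events. The event names, the handler
signature and the time threshold are assumptions made for this example; they
are not part of the reference implementation.
\begin{verbatim}
# Minimal sketch of tap detection from basic down/up events.
# Event names and the 0.3 second threshold are illustrative assumptions.
TAP_TIME = 0.3   # maximum duration of a tap, in seconds

down_times = {}  # touch id -> timestamp of its "down" event

def handle_event(event_type, touch_id, timestamp):
    if event_type == 'down':
        down_times[touch_id] = timestamp
    elif event_type == 'up':
        started = down_times.pop(touch_id, None)
        if started is not None and timestamp - started <= TAP_TIME:
            print('tap detected for touch %d' % touch_id)
\end{verbatim}
In this sketch, a \emph{down} event at $t=0.00$ followed by an \emph{up} event
for the same touch at $t=0.15$ would be reported as a tap.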
The translation of driver-specific messages to basic events, and of basic
events to multi-touch gestures, is often embedded in multi-touch application
frameworks, like Nokia's Qt \cite{qt}. However, there is no separate
implementation of this translation process itself. Consequently, a developer
who wants to use multi-touch interaction in an application is forced to choose
an application framework that includes support for multi-touch gestures.
Moreover, the set of supported gestures is limited by the application
framework. To incorporate a custom gesture in an application, the chosen
framework needs to provide a way to extend its existing set of multi-touch
gestures.
% Main question
The goal of this thesis is to create a generic architecture for the support of
multi-touch gestures in applications. To test the design of the architecture, a
reference implementation is written in Python. The architecture should
incorporate the translation of low-level driver messages to multi-touch
gestures, and it should be able to run alongside an application framework. The
definition of multi-touch gestures should be extensible, so that custom
gestures can be defined.
% Sub-questions
To design such an architecture properly, the following questions are relevant:
\begin{itemize}
\item What is the input of the architecture? This is determined by the
output of multi-touch drivers.
\item How can extensibility of the supported gestures be accomplished?
% TODO: are the questions below still relevant? Perhaps rephrase them as
% "Design"-related questions?
\item How can the architecture be used by different programming languages?
A generic architecture should not be limited to use in a single
language.
\item Can events be used by multiple processes at the same time? For
example, a network implementation could run as a service instead of
within a single application, triggering events in any application
that needs them.
\end{itemize}
% Scope
The scope of this thesis includes the design of a generic multi-touch
triggering architecture, a reference implementation of this design, and its
integration into a test case application. To be successful, the design
should allow for extensions to be added to any implementation.
The reference implementation is a Proof of Concept that translates TUIO
messages to some simple touch gestures that are used by some test
applications.
%Being a Proof of Concept, the reference implementation itself does not
%necessarily need to meet all the requirements of the design.
within a single application, triggering events in any application that
needs them.
\end{itemize}
% Scope
The scope of this thesis includes the design of a generic multi-touch gesture
triggering architecture, a reference implementation of this design, and its
integration into a test case application. To be successful, the design should
allow for extensions to be added to any implementation.
The reference implementation is a Proof of Concept that translates TUIO
messages into a set of simple touch gestures that are used by a test
application.
\section{Structure of this document}
@@ -99,7 +100,7 @@ Python.
toolkit for the development of gesture-based applications. The toolkit
states that the best way to classify gestures is to use machine learning.
The programmer trains a program to recognize gestures using the machine learning
library from the toolkit. The toolkit contains a callback-mechanism that
library from the toolkit. The toolkit contains a callback mechanism that
the programmer uses to execute custom code when a gesture is recognized.
Though multi-touch input is not directly supported by the toolkit, the
@@ -407,8 +408,8 @@ current, The object it represents has been lifted from the screen.
SET messages provide information about movement. In the case of simple (x, y) positions,
only the movement vector of the position itself can be calculated. For more
complex objects such as fiducials, arguments like rotational position is also
included.
complex objects such as fiducials, arguments like rotational position and
acceleration are also included.
ALIVE and SET messages can be combined to create ``point down'', ``point move''
and ``point up'' events (as used by the \cite[.NET application]{win7touch}).
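A minimal sketch of this combination step is given below; the function names
and printed event names are assumptions for this example, not the interface of
the reference implementation. An ALIVE message lists the session identifiers
of the objects currently on the screen, so identifiers that newly appear yield
``point down'' and identifiers that disappear yield ``point up'', while a SET
message for a known identifier with a changed position yields ``point move''.
\begin{verbatim}
# Sketch: derive point down/move/up events from TUIO ALIVE and SET
# messages. Function and event names are illustrative assumptions.
alive = set()    # session ids currently present in the ALIVE list
positions = {}   # session id -> last known (x, y) position

def on_alive(session_ids):
    current = set(session_ids)
    for sid in current - alive:
        print('point down:', sid)
    for sid in alive - current:
        positions.pop(sid, None)
        print('point up:', sid)
    alive.clear()
    alive.update(current)

def on_set(sid, x, y):
    if sid in alive and positions.get(sid) != (x, y):
        print('point move:', sid, x, y)
    positions[sid] = (x, y)
\end{verbatim}
Calling \texttt{on\_alive([1])}, then \texttt{on\_set(1, 0.4, 0.5)}, then
\texttt{on\_alive([])} would produce a ``point down'', a ``point move'' and a
``point up'' for session 1.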