Commit 28fbf26a authored by Taddeüs Kroes

Addressed some feedback items on the report.

parent c9e6ef00
@@ -206,14 +206,18 @@
\end{figure}
}
+\def\lefthand{\includegraphics[width=50pt]{data/hand.png}}
+\def\righthand{\reflectbox{\includegraphics[width=50pt, angle=-45]{data/hand.png}}}
\def\examplefigureone{
\begin{figure}[h]
\center
% TODO: draw finger touch points as circles with rotating arrow
\begin{tikzpicture}
\draw node[draw, black, minimum width=190, minimum height=140] at (0,0) {};
-\draw node[fill=gray!50, draw=black!70, minimum height=40, minimum width=40] at (-1,-1) {};
-\draw node[draw=black!80, diamond, minimum height=50, minimum width=50] at (1.2,1) {};
+\draw node[fill=gray!50, draw=black!70, minimum height=40, minimum width=40] at (-1,-1) {\lefthand};
+\draw node[] at (1.2,1) {\righthand};
+\draw node[draw=black!80, diamond, minimum height=70, minimum width=70] at (1.2,1) {};
\end{tikzpicture}
\caption{Two squares on the screen both listen to rotation. The user
should be able to ``grab'' each of the squares independently and rotate
......
@@ -18,17 +18,18 @@
% Title page
\maketitle
\begin{abstract}
-Device drivers provide a primitive set of messages. Applications that use
-complex gesture-based interaction need to translate these events to complex
-gestures, and map these gestures to elements in an application. This paper
-presents a generic architecture for the detection of complex gestures in an
-application. The architecture translates driver-specific messages to a
-common set of ``events''. The events are then delegated to a tree of
-``areas'', which are used to group events and assign it to an element in
+Applications that use
+complex gesture-based interaction need to translate primitive messages from
+low-level device drivers to complex, high-level gestures, and map these
+gestures to elements in an application. This report presents a generic
+architecture for the detection of complex gestures in an application. The
+architecture translates device driver messages to a common set of
+``events''. The events are then delegated to a tree of ``areas'', which are
+used to separate groups of events and assign these groups to an element in
the application. Gesture detection is performed on a group of events
assigned to an area, using detection units called ``gesture trackers''. An
-implementation of the architecture should run as a daemon process, serving
-gestures to multiple applications at the same time. A reference
+implementation of the architecture as a daemon process would be capable of
+serving gestures to multiple applications at the same time. A reference
implementation and two test case applications have been created to test the
effectiveness of the architecture design.
\end{abstract}
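As a rough illustration of the pipeline the abstract describes, the sketch below models common events, a tree of areas, and gesture trackers in Python. All names and signatures here are assumptions for illustration; the report's actual interfaces may differ.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        name: str    # common event type, e.g. "point_down" or "point_move"
        x: float
        y: float

    @dataclass
    class Area:
        # Rectangular screen region tied to an application element.
        x: float
        y: float
        width: float
        height: float
        trackers: list = field(default_factory=list)  # gesture trackers
        children: list = field(default_factory=list)  # nested areas

        def contains(self, event):
            return (self.x <= event.x < self.x + self.width
                    and self.y <= event.y < self.y + self.height)

        def delegate(self, event):
            # Delegate to the deepest matching child area first; events
            # that no child claims go to this area's own gesture trackers.
            if not self.contains(event):
                return False
            if any(child.delegate(event) for child in self.children):
                return True
            for tracker in self.trackers:
                tracker.handle(event)
            return bool(self.trackers)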
@@ -170,13 +171,13 @@ goal is to test the effectiveness of the design and detect its shortcomings.
\section{Introduction}
-This chapter describes the realization of a design for the generic
-multi-touch gesture detection architecture. The architecture is represented
-as diagram of relations between different components. Sections
-\ref{sec:driver-support} to \ref{sec:daemon} define requirements for the
-architecture, and extend the diagram with components that meet these
-requirements. Section \ref{sec:example} describes an example usage of the
-architecture in an application.
+This chapter describes a design for a generic multi-touch gesture detection
+architecture. The architecture is represented as a diagram of relations
+between different components. Sections \ref{sec:driver-support} to
+\ref{sec:daemon} define requirements for the architecture, and extend the
+diagram with components that meet these requirements. Section
+\ref{sec:example} describes an example usage of the architecture in an
+application.
The input of the architecture comes from a multi-touch device driver.
The task of the architecture is to translate this input to multi-touch
@@ -193,14 +194,14 @@ goal is to test the effectiveness of the design and detect its shortcomings.
multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
-may use very different messages types. To support more than one driver in
-the architecture, there must be some translation from driver-specific
-messages to a common format for primitive touch events. After all, the
-gesture detection logic in a ``generic'' architecture should not be
-implemented based on driver-specific messages. The event types in this
-format should be chosen so that multiple drivers can trigger the same
-events. If each supported driver would add its own set of event types to
-the common format, it the purpose of being ``common'' would be defeated.
+may use different message types. To support more than one driver in the
+architecture, there must be some translation from driver-specific messages
+to a common format for primitive touch events. After all, the gesture
+detection logic in a ``generic'' architecture should not be implemented
+based on driver-specific messages. The event types in this format should be
+chosen so that multiple drivers can trigger the same events. If each
+supported driver added its own set of event types to the common format,
+the purpose of it being ``common'' would be defeated.
A minimal expectation for a touch device driver is that it detects simple
touch points, with a ``point'' being an object at an $(x, y)$ position on
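For instance, a translation component for a TUIO-like driver might map SET messages, plus the disappearance of session ids from ALIVE messages, onto common point_down, point_move, and point_up events. The sketch below is simplified; the event names and the emit callback are assumptions, not the report's actual API.

    class TuioTranslator:
        def __init__(self, emit):
            self.emit = emit     # callback that receives common events
            self.alive = set()   # session ids from the last ALIVE message
            self.positions = {}  # last known (x, y) per session id

        def on_set(self, sid, x, y):
            # The first SET for an unknown id starts a touch point;
            # subsequent SET messages for the same id move it.
            name = "point_move" if sid in self.positions else "point_down"
            self.positions[sid] = (x, y)
            self.emit({"name": name, "id": sid, "x": x, "y": y})

        def on_alive(self, sids):
            # Ids missing from the new ALIVE list have been released.
            for sid in self.alive - set(sids):
                x, y = self.positions.pop(sid, (0.0, 0.0))
                self.emit({"name": "point_up", "id": sid, "x": x, "y": y})
            self.alive = set(sids)

A driver for a different protocol would keep its own translator; the rest of the architecture only ever sees the three common event types.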
@@ -231,7 +232,8 @@
Because driver implementations have a common output format in the form of
events, multiple event drivers can run at the same time (see figure
-\ref{fig:multipledrivers}).
+\ref{fig:multipledrivers}). This design feature allows low-level events
+from multiple devices to be aggregated into high-level gestures.
\multipledriversdiagram
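A minimal sketch of that situation, with stub drivers standing in for real driver wrappers: each driver pushes its normalized events onto one shared queue, and a single consumer sees the merged stream. Everything here is illustrative; no real driver API is implied.

    import itertools
    import queue
    import threading
    import time

    class StubDriver:
        """Stand-in for a driver wrapper that yields common events."""
        def __init__(self, device):
            self.device = device

        def common_events(self):
            for i in itertools.count():
                time.sleep(0.1)
                yield {"device": self.device, "name": "point_move",
                       "x": float(i % 100), "y": float(i % 100)}

    events = queue.Queue()

    def pump(driver):
        # Each driver thread feeds the shared queue of common events.
        for event in driver.common_events():
            events.put(event)

    for d in (StubDriver("tuio"), StubDriver("synaptics")):
        threading.Thread(target=pump, args=(d,), daemon=True).start()

    # One consumer sees the events of every device as a single stream.
    for _ in range(10):
        print(events.get())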
@@ -243,27 +245,26 @@
screen and therefore generate events that simply identify the screen
location at which an event takes place. In order to be able to direct a
gesture to a particular widget on screen, an application programmer must
-restrict the occurrence of a gesture to the area of the screen covered by
-that widget. An important question is if the architecture should offer a
-solution to this problem, or leave it to the application developer to
-assign gestures to a widget.
+restrict a gesture to the area of the screen covered by that widget. An
+important question is whether the architecture should offer a solution to
+this problem, or leave it to the application developer.
The latter case generates a problem when a gesture must be able to occur at
different screen positions at the same time. Consider the example in figure
-\ref{fig:ex1}, where two squares must be able to be rotated independently
-at the same time. If the developer is left the task to assign a gesture to
-one of the squares, the event analysis component in figure
-\ref{fig:driverdiagram} receives all events that occur on the screen.
-Assuming that the rotation detection logic detects a single rotation
-gesture based on all of its input events, without detecting clusters of
-input events, only one rotation gesture can be triggered at the same time.
-When a user attempts to ``grab'' one rectangle with each hand, the events
-triggered by all fingers are combined to form a single rotation gesture
-instead of two separate gestures.
+\ref{fig:ex1}, where two squares can be rotated independently at the same
+time. If the task of assigning a gesture to one of the squares is left to
+the developer, the event analysis component in figure
+\ref{fig:driverdiagram} receives all events that occur on the screen.
+Assuming that the rotation detection logic detects a single rotation
+gesture based on all of its input events, without detecting clusters of
+input events, only one rotation gesture can be triggered at a time. When a
+user attempts to ``grab'' one rectangle with each hand, the events
+triggered by all fingers are combined to form a single rotation gesture
+instead of two separate gestures.
\examplefigureone
-To overcome this problem, groups of events must be separated by the event
+To overcome this problem, groups of events must be clustered by the event
analysis component before any detection logic is executed. An obvious
solution for the given example is to incorporate this separation in the
rotation detection logic itself, using a distance threshold that decides if
......
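The truncated sentence above refers to a distance threshold. One plausible form of such clustering is sketched below; the threshold value and the greedy strategy are assumptions for illustration only.

    from math import hypot

    THRESHOLD = 150.0  # assumed maximum distance between fingers of one hand

    def cluster(points):
        # Greedy clustering: a point joins the first cluster that already
        # contains a point within THRESHOLD, else it starts a new cluster.
        clusters = []
        for x, y in points:
            for c in clusters:
                if any(hypot(x - px, y - py) < THRESHOLD for px, py in c):
                    c.append((x, y))
                    break
            else:
                clusters.append([(x, y)])
        return clusters

    # Three fingers per hand, hands roughly 400 pixels apart: two clusters,
    # so two independent rotation gestures can be detected.
    left = [(100, 100), (120, 140), (160, 110)]
    right = [(520, 130), (540, 170), (580, 120)]
    print(len(cluster(left + right)))  # prints 2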