Commit 28fbf26a authored by Taddeüs Kroes

Addressed some feedback items of the report.

parent c9e6ef00
@@ -206,14 +206,18 @@
 \end{figure}
 }
+\def\lefthand{\includegraphics[width=50pt]{data/hand.png}}
+\def\righthand{\reflectbox{\includegraphics[width=50pt, angle=-45]{data/hand.png}}}
 \def\examplefigureone{
     \begin{figure}[h]
         \center
         % TODO: draw finger touch points as circles with rotating arrow
         \begin{tikzpicture}
             \draw node[draw, black, minimum width=190, minimum height=140] at (0,0) {};
-            \draw node[fill=gray!50, draw=black!70, minimum height=40, minimum width=40] at (-1,-1) {};
-            \draw node[draw=black!80, diamond, minimum height=50, minimum width=50] at (1.2,1) {};
+            \draw node[fill=gray!50, draw=black!70, minimum height=40, minimum width=40] at (-1,-1) {\lefthand};
+            \draw node[] at (1.2,1) {\righthand};
+            \draw node[draw=black!80, diamond, minimum height=70, minimum width=70] at (1.2,1) {};
         \end{tikzpicture}
         \caption{Two squares on the screen both listen to rotation. The user
             should be able to ``grab'' each of the squares independently and rotate
......
@@ -18,17 +18,18 @@
 % Title page
 \maketitle
 \begin{abstract}
-    Device drivers provide a primitive set of messages. Applications that use
-    complex gesture-based interaction need to translate these events to complex
-    gestures, and map these gestures to elements in an application. This paper
-    presents a generic architecture for the detection of complex gestures in an
-    application. The architecture translates driver-specific messages to a
-    common set of ``events''. The events are then delegated to a tree of
-    ``areas'', which are used to group events and assign it to an element in
+    Applications that use complex gesture-based interaction need to translate
+    primitive messages from low-level device drivers to complex, high-level
+    gestures, and map these gestures to elements in an application. This
+    report presents a generic architecture for the detection of complex
+    gestures in an application. The architecture translates device driver
+    messages to a common set of ``events''. The events are then delegated to
+    a tree of ``areas'', which are used to separate groups of events and
+    assign these groups to an element in
     the application. Gesture detection is performed on a group of events
     assigned to an area, using detection units called ``gesture trackers''. An
-    implementation of the architecture should run as a daemon process, serving
-    gestures to multiple applications at the same time. A reference
+    implementation of the architecture as a daemon process would be capable of
+    serving gestures to multiple applications at the same time. A reference
     implementation and two test case applications have been created to test the
     effectiveness of the architecture design.
 \end{abstract}
@@ -170,13 +171,13 @@ goal is to test the effectiveness of the design and detect its shortcomings.
 \section{Introduction}
-    This chapter describes the realization of a design for the generic
-    multi-touch gesture detection architecture. The architecture is represented
-    as a diagram of relations between different components. Sections
-    \ref{sec:driver-support} to \ref{sec:daemon} define requirements for the
-    architecture, and extend the diagram with components that meet these
-    requirements. Section \ref{sec:example} describes an example usage of the
-    architecture in an application.
+    This chapter describes a design for a generic multi-touch gesture detection
+    architecture. The architecture is represented as a diagram of relations
+    between different components. Sections \ref{sec:driver-support} to
+    \ref{sec:daemon} define requirements for the architecture, and extend the
+    diagram with components that meet these requirements. Section
+    \ref{sec:example} describes an example usage of the architecture in an
+    application.

     The input of the architecture comes from a multi-touch device driver.
     The task of the architecture is to translate this input to multi-touch
@@ -193,14 +194,14 @@ goal is to test the effectiveness of the design and detect its shortcomings.
     multi-touch devices. TUIO uses ALIVE- and SET-messages to communicate
     low-level touch events (see appendix \ref{app:tuio} for more details).
     These messages are specific to the API of the TUIO protocol. Other drivers
-    may use very different message types. To support more than one driver in
-    the architecture, there must be some translation from driver-specific
-    messages to a common format for primitive touch events. After all, the
-    gesture detection logic in a ``generic'' architecture should not be
-    implemented based on driver-specific messages. The event types in this
-    format should be chosen so that multiple drivers can trigger the same
-    events. If each supported driver would add its own set of event types to
-    the common format, it the purpose of being ``common'' would be defeated.
+    may use different message types. To support more than one driver in the
+    architecture, there must be some translation from driver-specific messages
+    to a common format for primitive touch events. After all, the gesture
+    detection logic in a ``generic'' architecture should not be implemented
+    based on driver-specific messages. The event types in this format should be
+    chosen so that multiple drivers can trigger the same events. If each
+    supported driver added its own set of event types to the common format,
+    the purpose of it being ``common'' would be defeated.

     A minimal expectation for a touch device driver is that it detects simple
     touch points, with a ``point'' being an object at an $(x, y)$ position on
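The driver-to-events translation described above could be sketched as follows. This is a minimal illustration, not the report's actual API: the class names, event type strings and method signatures are assumptions.

```python
# Hypothetical sketch: translating TUIO ALIVE-/SET-messages into a common
# event format ('point_down', 'point_move', 'point_up'). All names assumed.

class Event:
    """Common-format primitive touch event."""
    def __init__(self, event_type, point_id, x, y):
        self.type = event_type
        self.point_id = point_id
        self.x, self.y = x, y

class TUIODriver:
    """Translates TUIO messages to common events."""
    def __init__(self):
        self.alive = {}  # session id -> last known (x, y) position

    def on_set(self, session_id, x, y):
        # A SET-message carries a position; a new session id means a new touch.
        event_type = 'point_move' if session_id in self.alive else 'point_down'
        self.alive[session_id] = (x, y)
        return Event(event_type, session_id, x, y)

    def on_alive(self, session_ids):
        # An ALIVE-message lists sessions still touching the surface;
        # any id missing from the list has been released.
        released = set(self.alive) - set(session_ids)
        events = []
        for sid in released:
            x, y = self.alive.pop(sid)
            events.append(Event('point_up', sid, x, y))
        return events
```

A driver for a different protocol would only need to emit the same three event types, keeping the gesture detection logic independent of any specific driver.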
@@ -231,7 +232,8 @@ goal is to test the effectiveness of the design and detect its shortcomings.
     Because driver implementations have a common output format in the form of
     events, multiple event drivers can run at the same time (see figure
-    \ref{fig:multipledrivers}).
+    \ref{fig:multipledrivers}). This design feature allows low-level events
+    from multiple devices to be aggregated into high-level gestures.

     \multipledriversdiagram
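The fan-in of several drivers into one analysis component can be illustrated with a shared queue; the driver names and event fields below are illustrative assumptions, not part of the report's design.

```python
# Sketch: two event drivers emitting common-format events into the single
# input stream of the event analysis component. Names are assumed.
from queue import Queue

event_queue = Queue()  # shared input of the event analysis component

class EventDriver:
    def __init__(self, name, sink):
        self.name, self.sink = name, sink

    def emit(self, event_type, x, y):
        # Whatever its native protocol, every driver emits the same format.
        self.sink.put({'source': self.name, 'type': event_type, 'x': x, 'y': y})

tuio = EventDriver('tuio', event_queue)
tablet = EventDriver('tablet', event_queue)
tuio.emit('point_down', 0.2, 0.3)
tablet.emit('point_down', 0.8, 0.7)

# The analysis component consumes one merged stream of events:
merged = [event_queue.get() for _ in range(2)]
```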
@@ -243,27 +245,26 @@ goal is to test the effectiveness of the design and detect its shortcomings.
     screen and therefore generate events that simply identify the screen
     location at which an event takes place. In order to be able to direct a
     gesture to a particular widget on screen, an application programmer must
-    restrict the occurrence of a gesture to the area of the screen covered by
-    that widget. An important question is if the architecture should offer a
-    solution to this problem, or leave it to the application developer to
-    assign gestures to a widget.
+    restrict a gesture to the area of the screen covered by that widget. An
+    important question is whether the architecture should offer a solution to
+    this problem, or leave it to the application developer.

     The latter case generates a problem when a gesture must be able to occur at
     different screen positions at the same time. Consider the example in figure
-    \ref{fig:ex1}, where two squares must be able to be rotated independently
-    at the same time. If the developer is left the task to assign a gesture to
-    one of the squares, the event analysis component in figure
-    \ref{fig:driverdiagram} receives all events that occur on the screen.
+    \ref{fig:ex1}, where two squares can be rotated independently at the same
+    time. If the developer is left with the task of assigning a gesture to one
+    of the squares, the event analysis component in figure
+    \ref{fig:driverdiagram} receives all events that occur on the screen.
     Assuming that the rotation detection logic detects a single rotation
     gesture based on all of its input events, without detecting clusters of
     input events, only one rotation gesture can be triggered at the same time.
-    When a user attempts to ``grab'' one rectangle with each hand, the events
-    triggered by all fingers are combined to form a single rotation gesture
-    instead of two separate gestures.
+    When a user attempts to ``grab'' one rectangle with each hand, the events
+    triggered by all fingers are combined to form a single rotation gesture
+    instead of two separate gestures.

     \examplefigureone
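One way to keep the two squares' events apart, in the spirit of the ``areas'' mentioned in the abstract, is to let each widget's area claim the events inside its bounds before any detection logic runs. The sketch below is an assumption about how such delegation could look; the class and function names are invented for illustration.

```python
# Sketch: delegating screen-wide events to rectangular areas so that each
# area's gesture trackers only see their own cluster of touch points.

class Area:
    def __init__(self, x, y, width, height):
        self.x, self.y, self.w, self.h = x, y, width, height
        self.events = []  # events delegated to this area's gesture trackers

    def contains(self, ex, ey):
        return self.x <= ex < self.x + self.w and self.y <= ey < self.y + self.h

def delegate(areas, events):
    # Deliver each event to the first area that contains its position.
    for ex, ey in events:
        for area in areas:
            if area.contains(ex, ey):
                area.events.append((ex, ey))
                break

left_square = Area(0, 0, 100, 100)
right_square = Area(200, 0, 100, 100)
# Two hands, two fingers on each square:
delegate([left_square, right_square],
         [(10, 20), (80, 90), (210, 20), (280, 90)])
# Each square's rotation tracker now receives only its own two touch points,
# so two independent rotation gestures can be detected simultaneously.
```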
-    To overcome this problem, groups of events must be separated by the event
+    To overcome this problem, groups of events must be clustered by the event
     analysis component before any detection logic is executed. An obvious
     solution for the given example is to incorporate this separation in the
     rotation detection logic itself, using a distance threshold that decides if
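A distance-threshold separation of this kind could be sketched as below. This is a minimal greedy clustering under an assumed threshold value, purely for illustration of the idea, not the report's implementation.

```python
# Sketch: cluster touch points by distance. A point joins an existing
# cluster if it lies within THRESHOLD of any point already in it,
# otherwise it starts a new cluster. THRESHOLD is an assumed value.
from math import hypot

THRESHOLD = 100  # pixels

def cluster(points, threshold=THRESHOLD):
    clusters = []
    for x, y in points:
        for c in clusters:
            if any(hypot(x - cx, y - cy) <= threshold for cx, cy in c):
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return clusters

# Four fingers, two per square, the squares far apart on screen:
groups = cluster([(10, 20), (40, 60), (400, 20), (430, 60)])
```

With the points separated into two groups, the rotation detection logic can run once per group and trigger two independent gestures.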
......