@@ -194,19 +194,12 @@ detection for every new gesture-based application.
at the same time.

This chapter describes a design for such an architecture. The architecture
- is represented as diagram of relations between different components.
- Sections \ref{sec:multipledrivers} to \ref{sec:daemon} define requirements
- for the architecture, and extend this diagram with components that meet
- these requirements. Section \ref{sec:example} describes an example usage of
- the architecture in an application.
+ components are shown in figure \ref{fig:fulldiagram}. Sections
+ \ref{sec:multipledrivers} to \ref{sec:daemon} explain the use of all
+ components in detail.

- The input of the architecture comes from a multi-touch device driver.
- The task of the architecture is to translate this input to multi-touch
- gestures that are used by an application, as illustrated in figure
- \ref{fig:basicdiagram}. In the course of this chapter, the diagram is
- extended with the different components of the architecture.
-
- \basicdiagram
+ \fulldiagram
+ \newpage

\section{Supporting multiple drivers}
\label{sec:multipledrivers}
@@ -216,10 +209,10 @@ detection for every new gesture-based application.
low-level touch events (see appendix \ref{app:tuio} for more details).
These messages are specific to the API of the TUIO protocol. Other drivers
may use different message types. To support more than one driver in the
- architecture, there must be some translation from driver-specific messages
+ architecture, there must be some translation from device-specific messages
to a common format for primitive touch events. After all, the gesture
detection logic in a ``generic'' architecture should not be implemented
- based on driver-specific messages. The event types in this format should be
+ based on device-specific messages. The event types in this format should be
chosen so that multiple drivers can trigger the same events. If each
supported driver added its own set of event types to the common format,
the purpose of it being ``common'' would be defeated.
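+
+ To make this concrete, the common format could be as small as the
+ following sketch. The event type names are illustrative assumptions,
+ not prescribed by the architecture:
+ \begin{verbatim}
+ # Sketch of a common event format (illustrative names).
+ POINT_DOWN, POINT_MOVE, POINT_UP = range(3)
+
+ class Event(object):
+     """A primitive touch event in a driver-independent form."""
+     def __init__(self, event_type, x, y, point_id):
+         self.type = event_type    # one of the common event types
+         self.x, self.y = x, y     # position on the screen surface
+         self.point_id = point_id  # identifies a finger across events
+ \end{verbatim}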
@@ -237,14 +230,11 @@ detection for every new gesture-based application.
TUIO protocol. Another driver that can distinguish rotated objects from
simple touch points could also trigger them.

- The component that translates driver-specific messages to common events,
+ The component that translates device-specific messages to common events
will be called the \emph{event driver}. The event driver runs in a loop,
receiving and analyzing driver messages. When a sequence of messages is
analyzed as an event, the event driver delegates the event to other
- components in the architecture for translation to gestures. This
- communication flow is illustrated in figure \ref{fig:driverdiagram}.
-
- \driverdiagram
+ components in the architecture for translation to gestures.
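+
+ As an illustration, the core loop of an event driver could look like the
+ following sketch. The \texttt{tuio} client module and its message fields
+ are hypothetical stand-ins for an actual driver API:
+ \begin{verbatim}
+ import tuio  # hypothetical TUIO client library
+
+ class TUIOEventDriver(object):
+     """Sketch of an event driver for the TUIO protocol."""
+     def __init__(self, delegate):
+         self.delegate = delegate    # receives common events
+
+     def run(self):
+         client = tuio.Client()      # hypothetical driver connection
+         while True:                 # the event driver runs in a loop
+             msg = client.receive()  # driver-specific message
+             for event in self.translate(msg):
+                 self.delegate.handle_event(event)
+
+     def translate(self, msg):
+         # Map driver-specific messages to common events; e.g. a TUIO
+         # 2Dcur "set" message becomes a POINT_MOVE event.
+         if msg.profile == '2Dcur' and msg.command == 'set':
+             yield Event(POINT_MOVE, msg.x, msg.y, msg.session_id)
+         # ... other message types omitted for brevity
+ \end{verbatim}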
Support for a touch driver can be added by writing an event driver
implementation. The choice of event driver implementation that is used in an
@@ -277,13 +267,13 @@ detection for every new gesture-based application.
the architecture should offer a solution to this problem, or leave the task
of assigning gestures to application widgets to the application developer.

- If the architecture does not provide a solution, the ``Event analysis''
- component in figure \ref{fig:multipledrivers} receives all events that
- occur on the screen surface. The gesture detection logic thus uses all
- events as input to detect a gesture. This leaves no possibility for a
- gesture to occur at multiple screen positions at the same time. The problem
- is illustrated in figure \ref{fig:ex1}, where two widgets on the screen can
- be rotated independently. The rotation detection component that detects
+ If the architecture does not provide a solution, the ``gesture detection''
+ component in figure \ref{fig:fulldiagram} receives all events that occur on
+ the screen surface. The gesture detection logic thus uses all events as
+ input to detect a gesture. This leaves no possibility for a gesture to
+ occur at multiple screen positions at the same time. The problem is
+ illustrated in figure \ref{fig:ex1}, where two widgets on the screen can be
+ rotated independently. The rotation detection component that detects
rotation gestures receives all four fingers as input. If the two groups of
finger events are not separated by cluster detection, only one rotation
event will occur.
@@ -304,7 +294,7 @@ detection for every new gesture-based application.
covered by a widget, before passing them on to a gesture detection
component. Different gesture detection components can then detect gestures
simultaneously, based on different sets of input events. An area of the
- screen surface will be represented by an \emph{event area}. An event area
+ screen surface is represented by an \emph{event area}. An event area
filters input events based on their location, and then delegates events to
gesture detection components that are assigned to the event area. Events
which are located outside the event area are not delegated to its gesture
@@ -312,7 +302,11 @@ detection for every new gesture-based application.

In the example of figure \ref{fig:ex1}, the two rotatable widgets can be
represented by two event areas, each having a different rotation detection
- component.
+ component. Each event area can be defined by the four corner locations of
+ the square it represents. To detect whether an event is located inside a
+ square, the event areas use a point-in-polygon (PIP) test \cite{PIP}. It is
+ the task of the client application to update the corner locations of the
+ event area with those of the widget.
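+
+ A sketch of such a test is given below; this is the classic even-odd
+ ray casting algorithm, shown here for an arbitrary polygon of corner
+ locations:
+ \begin{verbatim}
+ def point_in_polygon(corners, x, y):
+     """Does (x, y) lie inside the polygon spanned by 'corners'?"""
+     inside = False
+     n = len(corners)
+     for i in range(n):
+         (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % n]
+         # Count edges crossed by a horizontal ray from (x, y).
+         if (y1 > y) != (y2 > y):
+             if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
+                 inside = not inside
+     return inside
+ \end{verbatim}
+ An event area for one of the widgets would call this test with its four
+ corner locations to decide whether to delegate an event.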
\subsection{Callback mechanism}
@@ -324,10 +318,6 @@ detection for every new gesture-based application.
callback mechanism to handle gestures in an application. Callback handlers
are bound to event areas, since event areas control the grouping of
events and thus the occurrence of gestures in an area of the screen.
- Figure \ref{fig:areadiagram} shows the position of areas in the
- architecture.
-
- \areadiagram
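+
+ A minimal sketch of this callback mechanism (the method names
+ \texttt{bind} and \texttt{trigger} are illustrative assumptions):
+ \begin{verbatim}
+ class EventArea(object):
+     """Sketch: gesture callbacks bound to an area of the screen."""
+     def __init__(self):
+         self.handlers = {}
+
+     def bind(self, gesture_type, handler):
+         self.handlers.setdefault(gesture_type, []).append(handler)
+
+     def trigger(self, gesture):
+         # Called by gesture detection when a gesture is detected.
+         for handler in self.handlers.get(gesture.type, []):
+             handler(gesture)
+ \end{verbatim}
+ An application would then bind a handler with something like
+ \texttt{area.bind('tap', on\_tap)}.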
\subsection{Area tree}
\label{sec:tree}
@@ -337,12 +327,11 @@ detection for every new gesture-based application.
event area that contains the event coordinates.

If the architecture were to be used in combination with an application
- framework like GTK \cite{GTK}, each GTK widget that responds to gestures
- should have a mirroring event area that synchronizes its location with that
- of the widget. Consider a panel with five buttons that all listen to a
- ``tap'' event. If the location of the panel changes as a result of movement
- of the application window, the positions of all buttons have to be updated
- too.
+ framework, each widget that responds to gestures should have a mirroring
+ event area that synchronizes its location with that of the widget. Consider
+ a panel with five buttons that all listen to a ``tap'' event. If the
+ location of the panel changes as a result of movement of the application
+ window, the positions of all buttons have to be updated too.

This process is simplified by the arrangement of event areas in a tree
structure. A root event area represents the panel, containing five other
@@ -354,7 +343,10 @@ detection for every new gesture-based application.
If the GUI toolkit provides an API for requesting the position and size of
a widget, a recommended first step when developing an application is to
create a subclass of the event area that automatically synchronizes with the
- position of a widget from the GUI framework.
+ position of a widget from the GUI framework. For example, the test
+ application described in section \ref{sec:testapp} extends the GTK
+ \cite{GTK} application window widget with the functionality of a
+ rectangular event area, to direct touch events to an application window.
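+
+ A sketch of such a synchronizing subclass is shown below;
+ \texttt{widget.get\_geometry()} stands in for whatever call the GUI
+ toolkit provides for requesting position and size:
+ \begin{verbatim}
+ class WidgetSyncedArea(object):
+     """Sketch: a rectangular event area mirroring a GUI widget."""
+     def __init__(self, widget):
+         self.widget = widget
+         self.sync()
+
+     def sync(self):
+         # Ask the toolkit for the widget's position and size, e.g.
+         # in response to a "configure" signal when a window moves.
+         self.x, self.y, self.w, self.h = self.widget.get_geometry()
+
+     def contains(self, x, y):
+         # For an axis-aligned rectangle, the point-in-polygon test
+         # reduces to a bounds check.
+         return (self.x <= x <= self.x + self.w
+                 and self.y <= y <= self.y + self.h)
+ \end{verbatim}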
\subsection{Event propagation}
\label{sec:eventpropagation}
@@ -394,13 +386,12 @@ detection for every new gesture-based application.
An additional type of event propagation is ``immediate propagation'', which
indicates propagation of an event from one gesture detection component to
another. This is applicable when an event area uses more than one gesture
- detection component. One of the components can stop the immediate
+ detection component. When regular propagation is stopped, the event is
+ propagated to other gesture detection components first, before actually
+ being stopped. One of the components can also stop the immediate
propagation of an event, so that the event is not passed to the next
gesture detection component, nor to the ancestors of the event area.
- When regular propagation is stopped, the event is propagated to other
- gesture detection components first, before actually being stopped.
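+
+ Both forms of propagation can be captured by two flags on the event
+ object, as in this sketch (names are illustrative):
+ \begin{verbatim}
+ def delegate_event(area, event):
+     """Sketch: deliver an event within an area, then to its parent."""
+     for tracker in area.trackers:
+         tracker.handle_event(event)
+         if event.immediate_propagation_stopped:
+             return  # skip remaining trackers and all ancestors
+     # Regular stop: all trackers of this area have seen the event,
+     # but it is not propagated to ancestor areas.
+     if area.parent is not None and not event.propagation_stopped:
+         delegate_event(area.parent, event)
+ \end{verbatim}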

- \newpage
\eventpropagationfigure

The concept of an event area is based on the assumption that the set of
@@ -467,13 +458,9 @@ detection for every new gesture-based application.
detection component defines a simple function that compares event
coordinates.

- \trackerdiagram
-
When a gesture tracker detects a gesture, this gesture is triggered in the
corresponding event area. The event area then calls the callbacks which are
- bound to the gesture type by the application. Figure
- \ref{fig:trackerdiagram} shows the position of gesture trackers in the
- architecture.
+ bound to the gesture type by the application.
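+
+ For example, a simple tracker for ``tap'' gestures could look like this
+ sketch (the \texttt{Gesture} type and time threshold are illustrative):
+ \begin{verbatim}
+ import time
+
+ class TapTracker(object):
+     """Sketch of a gesture tracker detecting 'tap' gestures."""
+     def __init__(self, area):
+         self.area = area   # event area that triggers the callbacks
+         self.pressed = {}  # press time and position per touch point
+
+     def on_point_down(self, event):
+         self.pressed[event.point_id] = (time.time(), event.x, event.y)
+
+     def on_point_up(self, event):
+         started = self.pressed.pop(event.point_id, None)
+         # A quick press and release is detected as a tap gesture;
+         # 'Gesture' is an assumed simple container type.
+         if started and time.time() - started[0] < 0.2:
+             self.area.trigger(Gesture('tap', event.x, event.y))
+ \end{verbatim}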
The use of gesture trackers as small detection units provides extensibility
of the architecture. A developer can write a custom gesture tracker and
@@ -643,6 +630,7 @@ the entire touch surface. The output of the application can be seen in figure
\end{figure}

\section{GTK+/Cairo application}
+\label{sec:testapp}

The second test application uses the GIMP toolkit (GTK+) \cite{GTK} to create
its user interface. Since GTK+ defines a main event loop that is started in
|