Taddeüs Kroes / multitouch / Commits

Commit 7a0b1b32, authored Jun 26, 2012 by Taddeüs Kroes
Parent: 7b535751

Worked on report.
Showing 2 changed files with 38 additions and 33 deletions:

    docs/data/diagrams.tex  (+7, -7)
    docs/report.tex         (+31, -26)
docs/data/diagrams.tex

...
@@ -253,13 +253,13 @@
             edge[linefrom, dotted, bend left=65] node[left] {4} (gray);
     \end{tikzpicture}
     }
-    \caption{Two nested squares both listen to rotation gestures. The two
-    figures both show a touch object triggering an event, which is
-    delegated through the event area tree in the order indicated by the numbered
-    arrow labels. Normal arrows represent events, dotted arrows represent
-    gestures. Note that the dotted arrows only represent the path a gesture
-    would travel in the tree \emph{if triggered}, not an actual triggered
-    gesture.}
+    \caption{Two nested squares both listen to rotation gestures. The two
+    figures both show a touch object triggering an event, which is
+    delegated through the event area tree in the order indicated by the
+    numbered arrow labels. Dotted arrows represent a flow of gestures,
+    regular arrows represent events.}
     \label{fig:eventpropagation}
 \end{figure}
 }
...
docs/report.tex

...
@@ -381,7 +381,8 @@ detection for every new gesture-based application.
     area in the tree that contains the triggered event. That event area should
     be the first to delegate the event to its gesture detection components, and
     then propagate the event up in the tree to its ancestors. A gesture
-    detection component can stop the propagation of the event.
+    detection component can stop the propagation of the event by its
+    corresponding event area.

     An additional type of event propagation is ``immediate propagation'', which
     indicates propagation of an event from one gesture detection component to
...
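The propagation scheme changed in the hunk above lends itself to a short illustration. Below is a minimal Python sketch that is not part of the repository: the names (Event, EventArea, on_event) and both stop flags are invented, and the distinction between regular and ``immediate'' propagation is modeled on the DOM's stopPropagation/stopImmediatePropagation pair, which the truncated paragraph appears to parallel.

    class Event(object):
        """A low-level touch event with two hypothetical stop flags."""

        def __init__(self, x, y):
            self.x, self.y = x, y
            self.stopped = False              # stop propagation to ancestor areas
            self.immediately_stopped = False  # also skip remaining components here

    class EventArea(object):
        """A node in the event area tree (names are illustrative only)."""

        def __init__(self, parent=None):
            self.parent = parent
            self.trackers = []  # this area's gesture detection components

        def delegate(self, event):
            # The event area containing the event delegates it to its own
            # gesture detection components first...
            for tracker in self.trackers:
                tracker.on_event(event)
                if event.immediately_stopped:
                    return
            # ...then propagates the event up in the tree to its ancestors,
            # unless a detection component has stopped the propagation.
            if not event.stopped and self.parent is not None:
                self.parent.delegate(event)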
@@ -422,20 +423,6 @@ detection for every new gesture-based application.
     complex gestures, like the writing of a character from the alphabet,
     require more advanced detection algorithms.

-    A way to detect these complex gestures based on a sequence of input events,
-    is with the use of machine learning methods, such as the Hidden Markov
-    Models\footnote{A Hidden Markov Model (HMM) is a statistical model without
-    a memory, it can be used to detect gestures based on the current input
-    state alone.} used for sign language detection by \cite{conf/gw/RigollKE97}.
-    A sequence of input states can be mapped to a feature vector that is
-    recognized as a particular gesture with a certain probability. An
-    advantage of using machine learning with respect to an imperative
-    programming style is that complex gestures can be described without the
-    use of explicit detection logic. For example, the detection of the
-    character `A' being written on the screen is difficult to implement using
-    an imperative programming style, while a trained machine learning system
-    can produce a match with relative ease.
-
     Sequences of events that are triggered by a multi-touch based surfaces are
     often of a manageable complexity. An imperative programming style is
     sufficient to detect many common gestures, like rotation and dragging. The
...
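The paragraph removed above (restored in revised form by the next hunk) describes recognizing a gesture from a sequence of input states with a certain probability. As a hedged illustration of that idea, here is a textbook HMM forward algorithm in Python; the discretization of events into observation symbols and the trained model parameters are assumptions, not something the report specifies.

    def sequence_likelihood(observations, initial, transition, emission):
        """Forward algorithm: P(observation sequence | one gesture's HMM).

        observations     -- input events discretized to symbols 0..K-1
        initial[i]       -- probability of starting in hidden state i
        transition[i][j] -- probability of moving from state i to state j
        emission[i][o]   -- probability of emitting symbol o in state i
        """
        n = len(initial)
        alpha = [initial[i] * emission[i][observations[0]] for i in range(n)]
        for o in observations[1:]:
            alpha = [sum(alpha[i] * transition[i][j] for i in range(n))
                     * emission[j][o]
                     for j in range(n)]
        return sum(alpha)

    def recognize(observations, models):
        """Map an event sequence to the gesture whose (trained) HMM assigns
        it the highest probability. 'models' maps gesture names to tuples of
        (initial, transition, emission) parameters."""
        return max(models,
                   key=lambda g: sequence_likelihood(observations, *models[g]))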
@@ -447,25 +434,40 @@ detection for every new gesture-based application.
     not managed well, the detection logic is prone to become chaotic and
     over-complex.

+    A way to detect more complex gestures based on a sequence of input events,
+    is with the use of machine learning methods, such as the Hidden Markov
+    Models\footnote{A Hidden Markov Model (HMM) is a statistical model without
+    a memory, it can be used to detect gestures based on the current input
+    state alone.} used for sign language detection by Gerhard Rigoll et al.
+    \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped to a
+    feature vector that is recognized as a particular gesture with a certain
+    probability. An advantage of using machine learning with respect to an
+    imperative programming style is that complex gestures can be described
+    without the use of explicit detection logic, thus reducing code complexity.
+    For example, the detection of the character `A' being written on the screen
+    is difficult to implement using an imperative programming style, while a
+    trained machine learning system can produce a match with relative ease.
+
     To manage complexity and support multiple styles of gesture detection
     logic, the architecture has adopted the tracker-based design as described
-    by \cite{win7touch}. Different detection components are wrapped in separate
-    gesture tracking units, or \emph{gesture trackers}. The input of a gesture
-    tracker is provided by an event area in the form of events. Each gesture
-    detection component is wrapped in a gesture tracker with a fixed type of
-    input and output. Internally, the gesture tracker can adopt any programming
-    style. A character recognition component can use an HMM, whereas a tap
-    detection component defines a simple function that compares event
-    coordinates.
+    by Manoj Kumar \cite{win7touch}. Different detection components are wrapped
+    in separate gesture tracking units called \emph{gesture trackers}. The
+    input of a gesture tracker is provided by an event area in the form of
+    events. Each gesture detection component is wrapped in a gesture tracker
+    with a fixed type of input and output. Internally, the gesture tracker can
+    adopt any programming style. A character recognition component can use an
+    HMM, whereas a tap detection component defines a simple function that
+    compares event coordinates.

     When a gesture tracker detects a gesture, this gesture is triggered in the
     corresponding event area. The event area then calls the callbacks which are
     bound to the gesture type by the application.

-    The use of gesture trackers as small detection units provides extendability
+    The use of gesture trackers as small detection units allows extendability
     of the architecture. A developer can write a custom gesture tracker and
     register it in the architecture. The tracker can use any type of detection
-    logic internally, as long as it translates events to gestures.
+    logic internally, as long as it translates low-level events to high-level
+    gestures.

     An example of a possible gesture tracker implementation is a
     ``transformation tracker'' that detects rotation, scaling and translation
...
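The tracker-based design in the hunk above maps naturally onto a small class hierarchy. The sketch below assumes a Python implementation and invents the names GestureTracker, TapTracker, on_event and trigger, as well as the tap thresholds; the report only fixes a tracker's type of input (events from an event area) and output (gestures).

    import time

    class GestureTracker(object):
        """Wraps one detection component; translates events to gestures."""

        def __init__(self, event_area):
            self.event_area = event_area

        def trigger(self, gesture_name, **data):
            # Hand the detected gesture to the corresponding event area,
            # which calls the callbacks the application bound to this
            # gesture type.
            self.event_area.trigger_gesture(gesture_name, **data)

        def on_event(self, event):
            raise NotImplementedError

    class TapTracker(GestureTracker):
        """Imperative detection logic: compare 'down'/'up' coordinates."""

        MAX_DISTANCE = 10   # pixels (illustrative threshold)
        MAX_DURATION = 0.3  # seconds (illustrative threshold)

        def __init__(self, event_area):
            super(TapTracker, self).__init__(event_area)
            self.down = {}  # touch id -> (x, y, timestamp)

        def on_event(self, event):
            if event.type == 'point_down':
                self.down[event.touch_id] = (event.x, event.y, time.time())
            elif event.type == 'point_up' and event.touch_id in self.down:
                x, y, t = self.down.pop(event.touch_id)
                if (abs(event.x - x) <= self.MAX_DISTANCE and
                        abs(event.y - y) <= self.MAX_DISTANCE and
                        time.time() - t <= self.MAX_DURATION):
                    self.trigger('tap', x=event.x, y=event.y)

A character recognition tracker could keep the same on_event/trigger interface while using an HMM internally, which is the point of fixing only the input and output types.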
@@ -498,7 +500,10 @@ detection for every new gesture-based application.
     An advantage of a daemon setup is that it can serve multiple applications
     at the same time. Alternatively, each application that uses gesture
     interaction would start its own instance of the architecture in a separate
-    process, which would be less efficient.
+    process, which would be less efficient. The network communication layer
+    also allows the architecture and a client application to run on separate
+    machines, thus distributing computational load. The other machine may even
+    use a different operating system.

     \section{Example usage}
     \label{sec:example}
...
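The last hunk adds the claim that the network communication layer lets the architecture and a client application run in separate processes or on separate machines. A sketch of the client side may make this concrete; everything in it is assumed, since the excerpt does not describe the protocol: the address, port and line-delimited JSON message format are invented for illustration.

    import json
    import socket

    def listen_for_gestures(host='127.0.0.1', port=7777):
        """Receive gesture messages from a (hypothetical) gesture daemon."""
        sock = socket.create_connection((host, port))
        buf = b''
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # daemon closed the connection
            buf += chunk
            # One JSON object per line, e.g. {"type": "tap", "x": 4, "y": 2}.
            while b'\n' in buf:
                line, buf = buf.split(b'\n', 1)
                gesture = json.loads(line.decode('utf-8'))
                print('gesture: %s at (%s, %s)'
                      % (gesture['type'], gesture['x'], gesture['y']))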