Commit a318b0d4, authored 12 years ago by Taddeüs Kroes
Rewrote part of gesture trackers section.
Parent: f39c98e4
Showing 2 changed files with 36 additions and 35 deletions:

docs/data/diagrams.tex: 4 additions, 3 deletions
docs/report.tex: 32 additions, 32 deletions
docs/data/diagrams.tex: +4 −3
...
...
@@ -145,7 +145,7 @@
 \architecture{
     \node[block, below of=driver] (eventdriver) {Event driver}
         edge[linefrom] node[right, near end] {driver-specific messages} (driver);
-    \node[block, below of=eventdriver] (area) {Area tree}
+    \node[block, below of=eventdriver] (area) {Event area tree}
         edge[linefrom] node[right] {events} (eventdriver);
     \node[block, right of=area, xshift=7em] (tracker) {Gesture trackers}
         edge[linefrom, bend right=10] node[above] {events} (area)
...
...
@@ -155,8 +155,9 @@
     \group{eventdriver}{eventdriver}{tracker}{area}{Architecture}
 }
-\caption{Extension of the diagram from figure \ref{fig:areadiagram},
-    showing the position of gesture trackers in the architecture.}
+\caption{Extension of the diagram from figure \ref{fig:areadiagram}
+    with gesture trackers. Gesture trackers detect high-level gestures
+    from low-level events.}
 \label{fig:trackerdiagram}
 \end{figure}
 }
...
...
docs/report.tex: +32 −32
...
...
@@ -409,47 +409,47 @@ goal is to test the effectiveness of the design and detect its shortcomings.
 \section{Detecting gestures from events}
 \label{sec:gesture-detection}
-The events that are grouped by areas must be translated to complex gestures
-in some way. Gestures such as a button tap or the dragging of an object
-using one finger are easy to detect by comparing the positions of sequential
-$point\_down$ and $point\_move$ events.

-A way to detect more complex gestures is based on a sequence of input
-features is with the use of machine learning methods, such as Hidden Markov
-Models\footnote{A Hidden Markov Model (HMM) is a statistical model without
-a memory, it can be used to detect gestures based on the current input
-state alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be
-mapped to a feature vector that is recognized as a particular gesture with
-some probability. This type of gesture recognition is often used in video
-processing, where large sets of data have to be processed. Using an
-imperative programming style to recognize each possible sign in sign
-language detection is near impossible, and certainly not desirable.
+The low-level events that are grouped by an event area must be translated
+to high-level gestures in some way. Simple gestures, such as a tap or the
+dragging of an element using one finger, are easy to detect by comparing
+the positions of sequential $point\_down$ and $point\_move$ events. More
+complex gestures, like the writing of a character from the alphabet,
+require more advanced detection algorithms.

+A way to detect complex gestures based on a sequence of input features
+is with the use of machine learning methods, such as Hidden Markov Models
+\footnote{A Hidden Markov Model (HMM) is a statistical model without a
+memory; it can be used to detect gestures based on the current input state
+alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped
+to a feature vector that is recognized as a particular gesture with a
+certain probability. An advantage of using machine learning compared to
+an imperative programming style is that complex gestures can be described
+without the use of explicit detection logic. For example, the detection of
+the character `A' being written on the screen is difficult to implement
+using an imperative programming style, while a trained machine learning
+system can produce a match with relative ease.
 Sequences of events that are triggered by a multi-touch based surfaces are
 often of a manageable complexity. An imperative programming style is
-sufficient to detect many common gestures. The imperative programming style
-is also familiar and understandable for a wide range of application
-developers. Therefore, the aim is to use this programming style in the
-architecture implementation that is developed during this project.
+sufficient to detect many common gestures, like rotation and dragging. The
+imperative programming style is also familiar and understandable for a wide
+range of application developers. Therefore, the architecture should support
+an imperative style of gesture detection.

 However, the architecture should not be limited to multi-touch surfaces
 alone. For example, the architecture should also be fit to be used in an
 application that detects hand gestures from video input.
-A problem with the imperative programming style is that the detection of
-different gestures requires different pieces of detection code. If this is
-not managed well, the detection logic is prone to become chaotic and
-over-complex.
+A problem with the imperative programming style is that the explicit
+detection of different gestures requires different gesture detection
+components. If these components are not managed well, the detection logic
+is prone to become chaotic and over-complex.
 To manage complexity and support multiple methods of gesture detection, the
 architecture has adopted the tracker-based design as described by
 \cite{win7touch}. Different detection components are wrapped in separate
-gesture tracking units, or \emph{gesture trackers} The input of a gesture
-tracker is provided by an area in the form of events. When a gesture
+gesture tracking units, or \emph{gesture trackers}. The input of a gesture
+tracker is provided by an event area in the form of events. When a gesture
 tracker detects a gesture, this gesture is triggered in the corresponding
-area. The area then calls the callbacks which are bound to the gesture
-type by the application. Figure \ref{fig:trackerdiagram} shows the position
-of gesture trackers in the architecture.
+event area. The event area then calls the callbacks which are bound to the
+gesture type by the application. Figure \ref{fig:trackerdiagram} shows the
+position of gesture trackers in the architecture.
 \trackerdiagram
...
...
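To make the rewritten opening paragraph concrete: a drag can be detected purely by comparing the positions of sequential $point\_down$ and $point\_move$ events. The Python sketch below illustrates that idea; it is not taken from the multitouch code base, and the DragTracker class, its method names and the callback signature are assumptions made for the example.

# Illustrative drag detection from point_down / point_move events.
# Not the multitouch project's code; all names here are assumed.

class DragTracker:
    def __init__(self, on_drag):
        self.on_drag = on_drag     # callback: (touch_id, dx, dy) -> None
        self.last = {}             # touch id -> last seen (x, y) position

    def point_down(self, touch_id, x, y):
        # Remember where the touch is, so the next move can be compared to it.
        self.last[touch_id] = (x, y)

    def point_move(self, touch_id, x, y):
        if touch_id not in self.last:
            return
        px, py = self.last[touch_id]
        dx, dy = x - px, y - py
        if dx or dy:
            self.on_drag(touch_id, dx, dy)   # sequential positions differ: a drag
        self.last[touch_id] = (x, y)

    def point_up(self, touch_id):
        self.last.pop(touch_id, None)

# Example: print drag deltas for one finger.
tracker = DragTracker(lambda tid, dx, dy: print("drag", tid, dx, dy))
tracker.point_down(1, 10, 10)
tracker.point_move(1, 14, 12)   # prints: drag 1 4 2

A real implementation would also need a movement threshold to separate a drag from a tap, but the core comparison of sequential positions stays the same.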
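The Hidden Markov Model paragraph says that a sequence of input states is mapped to a gesture with a certain probability. A minimal illustration of that idea is sketched below using the standard forward algorithm; the two toy models, the direction symbols and all probabilities are invented for the example and would in practice be obtained by training, as in the work cited as conf/gw/RigollKE97.

# Toy HMM-based gesture classification: each gesture has its own small model,
# and an observation sequence is assigned to the gesture whose model gives it
# the highest likelihood. All numbers are invented for illustration.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) for a small discrete HMM.

    start[i]    : probability of starting in state i
    trans[i][j] : probability of moving from state i to state j
    emit[i][o]  : probability of emitting symbol o while in state i
    """
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two 2-state models over direction symbols: 0 = "right", 1 = "down".
models = {
    "swipe_right": ([1.0, 0.0],
                    [[0.9, 0.1], [0.0, 1.0]],
                    [[0.9, 0.1], [0.5, 0.5]]),
    "swipe_down":  ([1.0, 0.0],
                    [[0.9, 0.1], [0.0, 1.0]],
                    [[0.1, 0.9], [0.5, 0.5]]),
}

def classify(observations):
    # Pick the gesture whose model assigns the sequence the highest likelihood.
    return max(models, key=lambda g: forward_likelihood(observations, *models[g]))

print(classify([0, 0, 0, 1]))  # -> "swipe_right"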
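The final paragraphs describe the interplay between event areas and gesture trackers: an event area forwards low-level events to its trackers, a tracker that detects a gesture triggers it on the area, and the area calls the callbacks bound to that gesture type by the application. The sketch below mirrors that flow; the class and method names (EventArea, TapTracker, deliver, trigger, bind) are assumed for illustration and are not the project's actual API.

# Rough sketch of the tracker / event area relation described in the report.

class EventArea:
    """Groups low-level events and dispatches detected gestures to callbacks."""

    def __init__(self):
        self.trackers = []
        self.handlers = {}                      # gesture type -> list of callbacks

    def add_tracker(self, tracker):
        tracker.area = self                     # tracker reports back to this area
        self.trackers.append(tracker)

    def bind(self, gesture_type, callback):
        self.handlers.setdefault(gesture_type, []).append(callback)

    def deliver(self, event):
        # Low-level events grouped by this area are passed to every tracker.
        for tracker in self.trackers:
            tracker.handle_event(event)

    def trigger(self, gesture_type, data):
        # Called by a tracker once it has detected a gesture.
        for callback in self.handlers.get(gesture_type, []):
            callback(data)

class TapTracker:
    """Detects a tap: a point_down followed by a point_up at nearly the same spot."""

    def __init__(self):
        self.area = None
        self.down_at = None

    def handle_event(self, event):
        kind, x, y = event
        if kind == "point_down":
            self.down_at = (x, y)
        elif kind == "point_up" and self.down_at is not None:
            dx, dy = x - self.down_at[0], y - self.down_at[1]
            if dx * dx + dy * dy < 25:          # moved less than 5 pixels
                self.area.trigger("tap", {"x": x, "y": y})
            self.down_at = None

# Example usage: the application binds a callback for "tap" on an area.
area = EventArea()
area.add_tracker(TapTracker())
area.bind("tap", lambda data: print("tap at", data["x"], data["y"]))
area.deliver(("point_down", 100, 100))
area.deliver(("point_up", 101, 100))            # prints: tap at 101 100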