Taddeüs Kroes / multitouch / Commits

Commit a318b0d4, authored Jun 19, 2012 by Taddeüs Kroes
Rewrote part of gesture trackers section.
Parent: f39c98e4
Showing 2 changed files with 36 additions and 35 deletions.
docs/data/diagrams.tex   +4  -3
docs/report.tex          +32 -32
docs/data/diagrams.tex

@@ -145,7 +145,7 @@
     \architecture{
         \node[block, below of=driver] (eventdriver) {Event driver}
             edge[linefrom] node[right, near end] {driver-specific messages} (driver);
-        \node[block, below of=eventdriver] (area) {Area tree}
+        \node[block, below of=eventdriver] (area) {Event area tree}
             edge[linefrom] node[right] {events} (eventdriver);
         \node[block, right of=area, xshift=7em] (tracker) {Gesture trackers}
             edge[linefrom, bend right=10] node[above] {events} (area)
@@ -155,8 +155,9 @@
         \group{eventdriver}{eventdriver}{tracker}{area}{Architecture}
     }
-    \caption{Extension of the diagram from figure \ref{fig:areadiagram},
-    showing the position of gesture trackers in the architecture.}
+    \caption{Extension of the diagram from figure \ref{fig:areadiagram}
+    with gesture trackers. Gesture trackers analyze detect high-level
+    gestures from low-level events.}
     \label{fig:trackerdiagram}
     \end{figure}
}
docs/report.tex

@@ -409,47 +409,47 @@ goal is to test the effectiveness of the design and detect its shortcomings.
 \section{Detecting gestures from events}
 \label{sec:gesture-detection}
 
-The events that are grouped by areas must be translated to complex gestures
-in some way. Gestures such as a button tap or the dragging of an object
-using one finger are easy to detect by comparing the positions of sequential
-$point\_down$ and $point\_move$ events.
-
-A way to detect more complex gestures is based on a sequence of input
-features is with the use of machine learning methods, such as Hidden Markov
-Models \footnote{A Hidden Markov Model (HMM) is a statistical model without
-a memory, it can be used to detect gestures based on the current input
-state alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be
-mapped to a feature vector that is recognized as a particular gesture with
-some probability. This type of gesture recognition is often used in video
-processing, where large sets of data have to be processed. Using an
-imperative programming style to recognize each possible sign in sign
-language detection is near impossible, and certainly not desirable.
+The low-level events that are grouped by an event area must be translated
+to high-level gestures in some way. Simple gestures, such as a tap or the
+dragging of an element using one finger, are easy to detect by comparing
+the positions of sequential $point\_down$ and $point\_move$ events. More
+complex gestures, like the writing of a character from the alphabet,
+require more advanced detection algorithms.
+
+A way to detect complex gestures based on a sequence of input features
+is with the use of machine learning methods, such as Hidden Markov Models
+\footnote{A Hidden Markov Model (HMM) is a statistical model without a
+memory, it can be used to detect gestures based on the current input state
+alone.} \cite{conf/gw/RigollKE97}. A sequence of input states can be mapped
+to a feature vector that is recognized as a particular gesture with a
+certain probability. An advantage of using machine learning with respect to
+an imperative programming style is that complex gestures can be described
+without the use of explicit detection logic. For example, the detection of
+the character `A' being written on the screen is difficult to implement
+using an imperative programming style, while a trained machine learning
+system can produce a match with relative ease.
 
 Sequences of events that are triggered by a multi-touch based surfaces are
 often of a manageable complexity. An imperative programming style is
-sufficient to detect many common gestures. The imperative programming style
-is also familiar and understandable for a wide range of application
-developers. Therefore, the aim is to use this programming style in the
-architecture implementation that is developed during this project.
+sufficient to detect many common gestures, like rotation and dragging. The
+imperative programming style is also familiar and understandable for a wide
+range of application developers. Therefore, the architecture should support
+an imperative style of gesture detection.
 
 However, the architecture should not be limited to multi-touch surfaces
 alone. For example, the architecture should also be fit to be used in an
 application that detects hand gestures from video input.
 
-A problem with the imperative programming style is that the detection of
-different gestures requires different pieces of detection code. If this is
-not managed well, the detection logic is prone to become chaotic and
-over-complex.
+A problem with the imperative programming style is that the explicit
+detection of different gestures requires different gesture detection
+components. If these components is not managed well, the detection logic is
+prone to become chaotic and over-complex.
 
 To manage complexity and support multiple methods of gesture detection, the
 architecture has adopted the tracker-based design as described by
 \cite{win7touch}. Different detection components are wrapped in separate
-gesture tracking units, or \emph{gesture trackers} The input of a gesture
-tracker is provided by an area in the form of events. When a gesture
+gesture tracking units, or \emph{gesture trackers}. The input of a gesture
+tracker is provided by an event area in the form of events. When a gesture
 tracker detects a gesture, this gesture is triggered in the corresponding
-area. The area then calls the callbacks which are bound to the gesture
-type by the application. Figure \ref{fig:trackerdiagram} shows the position
-of gesture trackers in the architecture.
+event area. The event area then calls the callbacks which are bound to the
+gesture type by the application. Figure \ref{fig:trackerdiagram} shows the
+position of gesture trackers in the architecture.
 
 \trackerdiagram
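The rewritten opening paragraph of the section states that simple gestures, such as a tap or a one-finger drag, are easy to detect by comparing the positions of sequential $point\_down$ and $point\_move$ events. The sketch below illustrates that imperative style; the class, attribute, and event names are hypothetical and are not taken from the multitouch code base.

from collections import namedtuple

# Hypothetical event record: 'kind' is 'point_down', 'point_move' or 'point_up'.
Event = namedtuple('Event', ['kind', 'x', 'y'])


class DragDetector:
    """Detects a one-finger drag by comparing sequential event positions."""

    def __init__(self, on_drag):
        self.on_drag = on_drag   # callback invoked with (dx, dy) per move
        self.last = None         # position of the previous down/move event

    def handle_event(self, event):
        if event.kind == 'point_down':
            self.last = (event.x, event.y)
        elif event.kind == 'point_move' and self.last is not None:
            dx, dy = event.x - self.last[0], event.y - self.last[1]
            self.on_drag(dx, dy)
            self.last = (event.x, event.y)
        elif event.kind == 'point_up':
            self.last = None


# Example: a short down/move/up sequence produces two drag deltas.
detector = DragDetector(lambda dx, dy: print('drag', dx, dy))
for ev in [Event('point_down', 10, 10), Event('point_move', 14, 10),
           Event('point_move', 14, 17), Event('point_up', 14, 17)]:
    detector.handle_event(ev)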
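The last paragraphs of the diff describe the tracker-based design: an event area forwards its events to gesture trackers, and a tracker that detects a gesture triggers it in the event area, which then calls the callbacks the application bound to that gesture type. A minimal sketch of that flow is given below, reusing the Event record from the previous sketch; all names are illustrative assumptions, not the project's actual API.

class Gesture:
    """Hypothetical container for a detected high-level gesture."""

    def __init__(self, gesture_type, x, y):
        self.type = gesture_type
        self.x = x
        self.y = y


class EventArea:
    """Groups low-level events and dispatches detected gestures to callbacks."""

    def __init__(self):
        self.trackers = []   # gesture trackers attached to this area
        self.handlers = {}   # gesture type -> callbacks bound by the application

    def add_tracker(self, tracker):
        self.trackers.append(tracker)

    def bind(self, gesture_type, callback):
        self.handlers.setdefault(gesture_type, []).append(callback)

    def deliver_event(self, event):
        # Low-level events grouped by this area are forwarded to every tracker.
        for tracker in self.trackers:
            tracker.handle_event(event)

    def trigger(self, gesture):
        # Called by a tracker when it detects a gesture; the area then calls
        # the callbacks that the application bound to this gesture type.
        for callback in self.handlers.get(gesture.type, []):
            callback(gesture)


class TapTracker:
    """Minimal gesture tracker: turns a point_down/point_up pair into a tap."""

    def __init__(self, area):
        self.area = area
        self.down = None

    def handle_event(self, event):
        if event.kind == 'point_down':
            self.down = (event.x, event.y)
        elif event.kind == 'point_up' and self.down is not None:
            self.area.trigger(Gesture('tap', event.x, event.y))
            self.down = None


# Example wiring: the application binds a callback, then events flow in.
area = EventArea()
area.add_tracker(TapTracker(area))
area.bind('tap', lambda g: print('tap at', g.x, g.y))
for ev in [Event('point_down', 5, 5), Event('point_up', 5, 5)]:
    area.deliver_event(ev)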