Commit bff901c6 authored 12 years ago by Taddeüs Kroes
Some linguistic improvements to report.
parent cc772060
Showing 1 changed file: docs/report.tex (+32 −32)
@@ -241,13 +241,15 @@ goal is to test the effectiveness of the design and detect its shortcomings.
\label{sec:areas}
% TODO: in introduction: gestures are composed of multiple primitives
-Touch input devices are unaware of the graphical input widgets rendered on
-screen and therefore generate events that simply identify the screen
-location at which an event takes place. In order to be able to direct a
-gesture to a particular widget on screen, an application programmer must
-restrict a gesture to the area of the screen covered by that widget. An
-important question is if the architecture should offer a solution to this
-problem, or leave it to the application developer.
+Touch input devices are unaware of the graphical input
+widgets\footnote{``Widget'' is a name commonly used to identify an element
+of a graphical user interface (GUI).} rendered on screen and therefore
+generate events that simply identify the screen location at which an event
+takes place. In order to be able to direct a gesture to a particular widget
+on screen, an application programmer must restrict a gesture to the area of
+the screen covered by that widget. An important question is if the
+architecture should offer a solution to this problem, or leave it to the
+application developer.
The latter case generates a problem when a gesture must be able to occur at
different screen positions at the same time. Consider the example in figure
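To make the problem in this hunk concrete: a touch event carries only a screen position, so if the architecture leaves the problem to the application developer, the application must hit-test every event against each widget's screen rectangle itself. A minimal Python sketch of that manual approach; the names (TouchEvent, WidgetRect, dispatch) are illustrative assumptions, not code from this repository:

from dataclasses import dataclass

@dataclass
class TouchEvent:
    # An event only identifies the screen location at which it takes
    # place; it carries no reference to the widget rendered there.
    x: float
    y: float

@dataclass
class WidgetRect:
    left: float
    top: float
    width: float
    height: float

    def contains(self, ev):
        return (self.left <= ev.x <= self.left + self.width and
                self.top <= ev.y <= self.top + self.height)

def dispatch(ev, widgets):
    # The manual approach: restrict each gesture to a widget by testing
    # the event position against the area covered by every widget.
    for name, rect in widgets.items():
        if rect.contains(ev):
            return name
    return None

print(dispatch(TouchEvent(20, 15), {'button': WidgetRect(0, 0, 100, 40)}))

The report's objection, picked up in the next hunk, is that such hand-written dispatch logic breaks down when the same gesture must be able to occur at different screen positions at the same time.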
@@ -277,30 +279,28 @@ goal is to test the effectiveness of the design and detect its shortcomings.
fingers multiple hands as well, in which case the use of a simple distance
threshold is insufficient. These examples show that gesture detection logic
is hard to implement without knowledge about (the position of) the
-widget\footnote{``Widget'' is a name commonly used to identify an element
-of a graphical user interface (GUI).} that is receiving the gesture.
-Therefore, a better solution for the assignment of events to gesture
-detection is to make the gesture detection component aware of the locations
-of application widgets on the screen. To accomplish this, the architecture
-must contain a representation of the screen area covered by a widget. This
-leads to the concept of an \emph{area}, which represents an area on the
-touch surface in which events should be grouped before being delegated to a
-form of gesture detection. Examples of simple area implementations are
-rectangles and circles. However, areas could also be made to represent
-more complex shapes.
-An area groups events and assigns them to some piece of gesture detection
-logic. This possibly triggers a gesture, which must be handled by the
-client application. A common way to handle framework events in an
-application is a ``callback'' mechanism: the application developer binds a
-function to an event, which is called by the framework when the event
-occurs. Because of the familiarity of this concept with developers, the
-architecture uses a callback mechanism to handle gestures in an
-application. Since an area controls the grouping of events and thus the
-occurrence of gestures in an area, gesture handlers for a specific gesture
-type are bound to an area. Figure \ref{fig:areadiagram} shows the position
-of areas in the architecture.
+widget that is receiving the gesture.
+A better solution for the assignment of events to gesture detection is to
+make the gesture detection component aware of the locations of application
+widgets on the screen. To accomplish this, the architecture must contain a
+representation of the screen area covered by a widget. This leads to the
+concept of an \emph{area}, which represents an area on the touch surface in
+which events should be grouped before being delegated to a form of gesture
+detection. Examples of simple area implementations are rectangles and
+circles. However, areas could also be made to represent more complex
+shapes.
+An area groups events and assigns them to gesture detection logic. This
+possibly triggers a gesture, which must be handled by the client
+application. A common way to handle events in an application is a
+``callback'' mechanism: the application developer binds a function to an
+event, which is called when the event occurs. Because of the familiarity of
+this concept with developers, the architecture uses a callback mechanism to
+handle gestures in an application. Since an area controls the grouping of
+events and thus the occurrence of gestures in an area, gesture handlers for
+a specific gesture type are bound to an area. Figure \ref{fig:areadiagram}
+shows the position of areas in the architecture.
\areadiagram
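The added paragraphs describe areas that group events and a callback mechanism binding gesture handlers to an area. A minimal sketch of that structure in Python; the class and method names (Area, RectangleArea, bind, trigger) are assumptions for illustration and need not match the reference implementation's actual API:

class Area:
    # Represents an area on the touch surface in which events are
    # grouped before being delegated to gesture detection.
    def __init__(self):
        self.handlers = {}  # gesture type -> bound callback functions

    def contains(self, x, y):
        raise NotImplementedError

    def bind(self, gesture_type, handler):
        # Callback mechanism: the developer binds a function to a
        # gesture type; it is called when that gesture occurs here.
        self.handlers.setdefault(gesture_type, []).append(handler)

    def trigger(self, gesture_type, *args):
        for handler in self.handlers.get(gesture_type, []):
            handler(*args)

class RectangleArea(Area):
    # One of the simple area shapes named in the text; a circle, or a
    # more complex shape, would only change the contains() test.
    def __init__(self, x, y, width, height):
        super().__init__()
        self.x, self.y, self.width, self.height = x, y, width, height

    def contains(self, x, y):
        return (self.x <= x <= self.x + self.width and
                self.y <= y <= self.y + self.height)

# Usage: bind a tap handler to an area, then let gesture detection
# trigger it for an event that the area contains.
button = RectangleArea(0, 0, 100, 40)
button.bind('tap', lambda x, y: print('tap at', x, y))
if button.contains(20, 15):
    button.trigger('tap', 20, 15)

Because the area, rather than the application, decides which events belong together, handlers for a specific gesture type are bound to the area itself, matching the figure referenced above.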
@@ -503,7 +503,7 @@ start the GUI main loop in the current thread
\chapter{Test applications}
-A reference implementation of the design is written in Python. Two test
+A reference implementation of the design has been written in Python. Two test
applications have been created to test if the design ``works'' in a practical
application, and to detect its flaws. One application is mainly used to test
the gesture tracker implementations. The other program uses areas in a tree,
...