Taddeüs Kroes
licenseplates
Commits
Commit 88424aaa
authored Dec 22, 2011 by Jayke Meijer

Completed report.

parent e6773d31
Showing 1 changed file with 45 additions and 42 deletions

docs/report.tex (+45 −42)
...
@@ -442,14 +442,14 @@ value, and what value we decided on.
The first parameter to decide on is the $\sigma$ used in the Gaussian blur. To
find this parameter, we tested a few values and checked the results. It turned
out that the best value was $\sigma = 1.9$.
Theoretically, this can be explained as follows. The filter has a width of
$6 * \sigma = 6 * 1.9 = 11.4$ pixels. The width of a `stroke' in a character
is, after our resize operations, around 10 pixels. This means our filter is in
proportion to the smallest detail size we want to be able to see, so everything
that is smaller is properly suppressed, yet the details we do want to keep,
being everything that is part of the character, are retained.
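The report does not show the blur implementation itself; a minimal numpy sketch of the kernel implied by $\sigma = 1.9$ follows (the function name and the 3-sigma truncation radius are assumptions for illustration, not taken from the project code):

```python
import numpy as np

# Hypothetical sketch: build a normalized 1D Gaussian kernel truncated at
# 3*sigma per side, so its total support is about 6*sigma pixels wide.
def gaussian_kernel(sigma, radius=None):
    if radius is None:
        radius = int(round(3 * sigma))   # 3 sigma on each side
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    return kernel / kernel.sum()         # normalize so blurring preserves brightness

kernel = gaussian_kernel(1.9)
# With sigma = 1.9 the kernel spans 2 * round(3 * 1.9) + 1 = 13 taps,
# on the order of the 11.4-pixel filter width discussed above.
```

Applying this kernel along both image axes (separably) gives the 2D Gaussian blur.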
\subsection{Parameter \emph{cell size}}
...
@@ -515,33 +515,35 @@ in the table are rounded percentages, for better readability.
$c \backslash \gamma$ & $2^{-15}$ & $2^{-13}$ & $2^{-11}$ & $2^{-9}$ & $2^{-7}$ & $2^{-5}$ & $2^{-3}$ & $2^{-1}$ & $2^{1}$ & $2^{3}$ \\
\hline
$2^{-5}$  & 63 & 63 & 63 & 63 & 63 & 65 & 68 & 74 & 59 & 20 \\
$2^{-3}$  & 63 & 63 & 63 & 63 & 63 & 65 & 70 & 80 & 60 & 20 \\
$2^{-1}$  & 63 & 63 & 63 & 63 & 63 & 71 & 84 & 89 & 81 & 23 \\
$2^{1}$   & 63 & 63 & 63 & 63 & 70 & 85 & 91 & 92 & 87 & 45 \\
$2^{3}$   & 63 & 63 & 63 & 70 & 85 & 91 & 93 & 93 & 86 & 45 \\
$2^{5}$   & 63 & 63 & 70 & 85 & 91 & 93 & 94 & 93 & 86 & 45 \\
$2^{7}$   & 63 & 70 & 85 & 91 & 93 & 93 & 93 & 93 & 86 & 45 \\
$2^{9}$   & 70 & 85 & 91 & 93 & 93 & 93 & 93 & 93 & 86 & 45 \\
$2^{11}$  & 85 & 91 & 93 & 93 & 93 & 93 & 93 & 93 & 86 & 45 \\
$2^{13}$  & 91 & 93 & 93 & 92 & 93 & 93 & 93 & 93 & 86 & 45 \\
$2^{15}$  & 93 & 93 & 92 & 92 & 93 & 93 & 93 & 93 & 86 & 45 \\
\hline
\end{tabular}
\\
The grid-search shows that the best values for these parameters are
$c = 2^5 = 32$ and $\gamma = 2^{-3} = 0.125$. These values were found for a
number of different blur sizes, so these are the best values for this
neighbourhood and this problem.
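The grid-search code itself is not part of this diff; an exhaustive search over the exponent grids of the table above can be sketched as follows (`toy_score` is a made-up stand-in for the cross-validated SVM accuracy actually measured):

```python
import itertools
import math

# The exponent grids from the table above: c ranges over 2^-5..2^15 and
# gamma over 2^-15..2^3, both in steps of 2^2.
c_grid = [2.0 ** e for e in range(-5, 17, 2)]
gamma_grid = [2.0 ** e for e in range(-15, 5, 2)]

def grid_search(score):
    # Evaluate every (c, gamma) pair and keep the best-scoring one.
    return max(itertools.product(c_grid, gamma_grid), key=lambda p: score(*p))

# Toy stand-in score that peaks at c = 2^5, gamma = 2^-3 (the optimum found
# above); in the real pipeline this would be SVM classification accuracy.
def toy_score(c, gamma):
    return -(abs(math.log2(c) - 5) + abs(math.log2(gamma) + 3))

best_c, best_gamma = grid_search(toy_score)  # -> (32.0, 0.125)
```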
\section{Results}
...
@@ -558,31 +560,32 @@
According to Wikipedia \cite{wikiplate}, commercial license plate recognition
software currently on the market scores about $90\%$ to $94\%$, under optimal
conditions and with modern equipment.
Our program scores an average of $94.0\%$. However, this is for a single
character. That means that a full license plate should theoretically get a
score of $0.940^6 = 0.690$, so $69.0\%$. That is not particularly good compared
to the commercial ones. However, our focus was on getting good scores per
character. For us, $94\%$ is a very satisfying result.
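The plate-level estimate above follows from assuming the six character classifications succeed independently; a quick check of the arithmetic:

```python
# Per-character accuracy from the report, raised to the power 6 for a
# six-character plate (assuming independent errors per character).
per_char = 0.940
per_plate = per_char ** 6
print(f"{per_plate:.3f}")  # prints 0.690
```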
\subsubsection*{Incorrectly classified characters}
As we do not have a $100\%$ score, it is interesting to see which characters
are classified wrongly. These characters are shown in appendix \ref{fcc}. Most
of these errors are easily explained. For example, some `0's are classified as
`D', some `1's are classified as `T' and some `F's are classified as `E'.
Of course, these are not as interesting as some of the weird matches. For
example, a `P' is classified as `7'. However, if we look more closely, the `P'
is standing diagonally, possibly because the datapoints were not very exact in
the XML file. This creates a large diagonal line in the image, which explains
why this can be classified as a `7'. The same has happened with a `T', which is
also marked as `7'.
Other strange matches include a `Z' classified as a `9', but this character has
a lot of noise surrounding it, which makes classification harder, and a `3'
that is classified as `9', where the exact opposite is the case. This plate has
no noise, due to which the background is a large area of equal color. This
might cause the classification to focus more on the background than on the
actual character. This happens for more characters, for instance a `5' as `P'.
\subsection{Speed}
...