Commit 4af16b98 authored Dec 22, 2011 by Taddeus Kroes
Merge branch 'master' of github.com:taddeus/licenseplates
parents 0bba6a5d b851b4a2
Showing 1 changed file with 36 additions and 4 deletions
docs/report.tex
...
...
@@ -204,7 +204,8 @@ In our case the support vector machine uses a radial gauss kernel function. The
\section{Implementation}
In this section we will describe our implementation in more detail, explaining
the choices we made in the process. We paid particular attention to structuring
the code in such a fashion that it can easily be extended.
\subsection{Character retrieval}
...
...
@@ -528,6 +529,26 @@ grid-searches, finding more exact values for $c$ and $\gamma$, more tests
for finding $\sigma$ and more experiments on the size and shape of the
neighbourhoods.
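As an illustration of such a search, the sketch below runs a coarse
logarithmic grid over $c$ and $\gamma$ (the width of the radial kernel
$K(x, x') = e^{-\gamma \|x - x'\|^2}$) for an RBF-kernel SVM. It uses
scikit-learn's \texttt{GridSearchCV} purely for brevity rather than libsvm,
and the feature matrix \texttt{X} and labels \texttt{y} are random
placeholders; with libsvm's command-line trainer the corresponding options
are \texttt{-c} and \texttt{-g}.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Placeholder data: one feature vector per character image and a label per
# character.  Any numeric feature matrix (e.g. LBP histograms) works here.
X = np.random.rand(300, 64)
y = np.random.randint(0, 10, size=300)

# Coarse logarithmic grid over the soft-margin parameter c and the RBF
# kernel width gamma, scored with 5-fold cross-validation.
param_grid = {
    'C':     [2.0 ** e for e in range(-3, 11, 2)],
    'gamma': [2.0 ** e for e in range(-11, 3, 2)],
}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
\end{verbatim}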
\subsubsection*{Incorrectly classified characters}
As we do not have a $100\%$ score, it is interesting to see which characters
are classified incorrectly. These characters are shown in appendix
\ref{faucla}. Most of these errors are easily explained. For example, some 0's
are classified as 'D', some 1's are classified as 'T' and some 'F's are
classified as 'E'.
Of course, these are not as interesting as some of the stranger matches. For
example, a 'P' is classified as a 7. However, if we look more closely, the 'P'
is tilted, possibly because the data points in the XML file were not very
precise. This creates a prominent diagonal line in the image, which explains
why it can be classified as a 7. The same happened with a 'T', which is also
marked as a 7.
Other strange matches include a 'Z' classified as a 9, but this character is
surrounded by a lot of noise, which makes classification harder, and a 3 that
is classified as a 9, where the exact opposite is the case. This plate has no
noise, so the background is a large area of uniform color. This might cause
the classification to focus more on the background than on the actual
character.
\subsection{Speed}
Recognizing license plates is something that has to be done fast, since there
...
...
@@ -565,7 +586,10 @@ research would be in place.
The expectation is that using a larger diameter pattern, but with the same
number of points, is worth trying. The theory behind this is that when using a
Gaussian blur to reduce noise, the edges are blurred as well. By taking a
larger radius, you look over a larger distance, so the blurry part of the edge
is skipped. By not using more points, there is no penalty in the time needed to
calculate this larger pattern, so there is an accuracy advantage `for free'.
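As a sketch of what such a larger-radius pattern with a fixed number of
points looks like in code, the illustrative NumPy function below (not taken
from the implementation described in this report) samples \texttt{points}
positions on a circle of \texttt{radius} pixels:
\begin{verbatim}
import numpy as np

def lbp_code(img, y, x, points=8, radius=3):
    # Circular local binary pattern at pixel (y, x): sample `points`
    # positions on a circle of `radius` pixels and compare each sample
    # with the centre pixel.  Growing the radius does not add samples,
    # so the cost per pixel stays the same.  Assumes (y, x) lies at
    # least radius + 1 pixels away from the image border.
    centre = img[y, x]
    code = 0
    for p in range(points):
        angle = 2.0 * np.pi * p / points
        sy, sx = y + radius * np.sin(angle), x + radius * np.cos(angle)
        y0, x0 = int(np.floor(sy)), int(np.floor(sx))
        dy, dx = sy - y0, sx - x0
        # Bilinear interpolation of the sampled (non-integer) position.
        value = (img[y0, x0]         * (1 - dy) * (1 - dx)
               + img[y0, x0 + 1]     * (1 - dy) * dx
               + img[y0 + 1, x0]     * dy       * (1 - dx)
               + img[y0 + 1, x0 + 1] * dy       * dx)
        code |= int(value >= centre) << p
    return code
\end{verbatim}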
\subsection{Context information}
...
...
@@ -647,6 +671,13 @@ are not properly classified. This is of course very problematic, both for
training the SVM and for checking the performance. This meant we had to check
for each character whether its description was correct.
As a final note, we would like to state that an, in our eyes, unrealistically
large number of characters is of poor quality, with a lot of dirt, crooked
plates, et cetera. Our own experience is that the average license plate is
easier to read. The local binary pattern method has proven to work on this
set, and as such has proven that it performs well in worst-case scenarios, but
we would like to see how it performs on a more realistic dataset.
\subsubsection*{SVM}
We also had trouble with the SVM for Python. The standard Python SVM, libsvm,
...
...
@@ -685,12 +716,13 @@ were instantaneous! A crew to remember.
\appendix
\section{Incorrectly classified characters}
\label{faucla}
\begin{figure}[H]
\hspace{-2cm}
\includegraphics[scale=0.5]{faulty.png}
\caption{Incorrectly classified characters}
\end{figure}
\begin{thebibliography}{9}
...
...