licenseplates · Taddeüs Kroes (archived project; repository and other project resources are read-only)

Commit 4af16b98
Authored 13 years ago by Taddeus Kroes

Merge branch 'master' of github.com:taddeus/licenseplates
Parents: 0bba6a5d, b851b4a2
Showing 1 changed file: docs/report.tex, with 36 additions and 4 deletions
@@ -204,7 +204,8 @@ In our case the support vector machine uses a radial gauss kernel function. The
 \section{Implementation}

 In this section we will describe our implementation in more detail, explaining
-the choices we made in the process.
+the choices we made in the process. We paid a lot of attention to structuring
+the code in such a way that it can easily be extended.

 \subsection{Character retrieval}
@@ -528,6 +529,26 @@ grid-searches, finding more exact values for $c$ and $\gamma$, more tests
 for finding $\sigma$ and more experiments on the size and shape of the
 neighbourhoods.

+\subsubsection*{Misclassified characters}
+
+As we do not have a $100\%$ score, it is interesting to see which characters
+are classified incorrectly. These characters are shown in appendix
+\ref{faucla}. Most of these errors are easily explained. For example, some
+0's are classified as 'D', some 1's are classified as 'T' and some 'F's are
+classified as 'E'.
+
+Of course, these are not as interesting as some of the stranger matches. For
+example, a 'P' is classified as a 7. However, if we look more closely, the 'P'
+is standing diagonally, possibly because the data points in the XML file were
+not very precise. This creates a large diagonal line in the image, which
+explains why it can be classified as a 7. The same has happened with a 'T',
+which is also marked as a 7.
+
+Other strange matches include a 'Z' classified as a 9, but this character has
+a lot of noise surrounding it, which makes classification harder, and a 3 that
+is classified as a 9, where the exact opposite is the case: this plate has no
+noise, so the background is a large area of equal color. This might cause the
+classification to focus more on the background than on the actual character.
+
 \subsection{Speed}

 Recognizing license plates is something that has to be done fast, since there
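The grid search over $c$ and $\gamma$ for the RBF ("radial gauss") kernel that this hunk refers to could look roughly like the sketch below. This is only an illustration: it uses scikit-learn rather than the libsvm bindings the report mentions, and the feature vectors, labels, and parameter ranges are placeholders, not the report's actual data.

```python
# Hypothetical sketch of the kind of grid search described above: an SVM with
# an RBF (radial Gaussian) kernel, scanning C and gamma over a coarse grid.
# scikit-learn is assumed purely for brevity; the report's own code used the
# libsvm bindings, and the data below is a random placeholder.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data: one feature histogram per character image, plus labels.
X = np.random.rand(200, 64)          # 200 characters, 64-bin feature vectors
y = np.random.randint(0, 36, 200)    # 36 classes: 0-9 and A-Z

param_grid = {
    "C": [2 ** k for k in range(-1, 8, 2)],       # coarse grid for C
    "gamma": [2 ** k for k in range(-7, 0, 2)],   # coarse grid for gamma
}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validation score:", search.best_score_)
```

In practice such a coarse grid is usually followed by a finer search around the best values, which matches the report's remark about finding more exact values for $c$ and $\gamma$.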
@@ -565,7 +586,10 @@ research would be in place.
 The expectation is that using a larger diameter pattern, but with the same
 number of points, is worth trying. The theory behind that is that when using a
-gaussian blur to reduce noise, the edges are blurred as well. By
+Gaussian blur to reduce noise, the edges are blurred as well. By taking a
+larger radius, you look over a larger distance, so the blurry part of the edge
+is skipped. By not using more points, there is no penalty in the time needed to
+calculate this larger pattern, so there is an accuracy advantage `for free'.

 \subsection{Context information}
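The idea in the hunk above, a larger-radius local binary pattern with the same number of sample points, computed after a Gaussian blur, can be illustrated with a small sketch. This is not the report's implementation: scikit-image and SciPy are assumed for brevity, and the image, sigma, and radii are placeholder values.

```python
# Rough illustration: blur the character image, then compute a local binary
# pattern with the same number of sample points but a larger radius, so the
# comparison pixels fall outside the blurred edge region.
# scikit-image/scipy are assumed for illustration only; the input image is a
# synthetic placeholder rather than a real license-plate character.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

image = np.random.rand(50, 50)              # placeholder grayscale character
blurred = gaussian_filter(image, sigma=1.4)  # noise reduction, edges blur too

points = 8          # same number of sample points in both cases
small = local_binary_pattern(blurred, P=points, R=1)   # standard radius
large = local_binary_pattern(blurred, P=points, R=3)   # larger radius, same cost

# The feature vector is typically a histogram of the pattern codes.
hist_large, _ = np.histogram(large, bins=2 ** points, range=(0, 2 ** points))
print(hist_large[:8])
```

Because the number of sample points stays the same, the larger pattern takes no extra time to compute, which is the "accuracy advantage for free" the text describes.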
@@ -647,6 +671,13 @@ are not properly classified. This is of course very problematic, both for
 training the SVM and for checking the performance. This meant we had to check
 for each character whether its description was correct.

+As a final note, we would like to state that, in our eyes, an unrealistically
+large number of characters is of bad quality, with a lot of dirt, crooked
+plates, et cetera. Our own experience is that the average license plate is
+easier to read. The local binary pattern method has proven to work on this
+set, and as such has proven that it performs well in worst-case scenarios, but
+we would like to see how it performs on a more realistic dataset.
+
 \subsubsection*{SVM}

 We also had trouble with the SVM for Python. The standard Python SVM, libsvm,
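For readers unfamiliar with the libsvm Python bindings mentioned at the end of the hunk above, a minimal usage sketch follows. The import path assumes the current libsvm-official packaging (older setups expose a bare svmutil module), and the labels, features, and parameter values are toy placeholders rather than anything from the report.

```python
# Minimal sketch of training and testing an RBF-kernel SVM through libsvm's
# Python bindings. The import path follows the libsvm-official package; older
# installations provide the same functions via a plain `svmutil` module.
# The data below is a toy placeholder, not the report's character set.
from libsvm.svmutil import svm_train, svm_predict

# Toy training set: labels and sparse feature dicts {index: value}.
labels = [0, 0, 1, 1]
features = [{1: 0.1, 2: 0.9}, {1: 0.2, 2: 0.8},
            {1: 0.9, 2: 0.1}, {1: 0.8, 2: 0.2}]

# -t 2 selects the RBF kernel; -c and -g are the C and gamma parameters
# that the grid search discussed earlier tries to optimise.
model = svm_train(labels, features, "-t 2 -c 8 -g 0.5 -q")
predicted, accuracy, _ = svm_predict(labels, features, model)
print(predicted, accuracy)
```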
@@ -685,12 +716,13 @@ were instantaneous! A crew to remember.
 \appendix

-\section{Faulty Classifications}
+\section{Misclassified characters}
+\label{faucla}

 \begin{figure}[H]
 \hspace{-2cm}
 \includegraphics[scale=0.5]{faulty.png}
-\caption{Faulty classifications of characters}
+\caption{Misclassified characters}
 \end{figure}

 \begin{thebibliography}{9}