Taddeüs Kroes / licenseplates / Commits

Commit 78588d2a, authored Dec 21, 2011 by Taddeus Kroes

    Automatic whitespace fixed by vim...

parent 8473cbe5

1 changed file: docs/report.tex (+23 additions, -23 deletions)
...
...
@@ -16,7 +16,7 @@
\section*{Project members}
Gijs van der Voort\\
Richard Torenvliet\\
Jayke Meijer\\
Tadde\"us Kroes\\
Fabi\"en Tesselaar
...
...
@@ -36,7 +36,7 @@ Reading license plates with a computer is much more difficult. Our dataset
contains photographs of license plates from various angles and distances. This
means that not only do we have to implement a method to read the actual
characters, but, given the location of the license plate and each individual
character, we must make sure we transform each character to a standard form.
Determining what character we are looking at will be done by using Local Binary
Patterns. The main goal of our research is finding out how effective LBPs are
...
...
@@ -57,9 +57,9 @@ In short our program must be able to do the following:
The actual purpose of this project is to check whether LBP is capable of
recognizing license plate characters. We knew the LBP implementation would be
fairly simple, so an advantage had to be its speed compared with other license
plate recognition implementations; but the uncertainty of whether we could get
results at all made us pick Python. We felt Python would not restrict us as
much in assigning tasks to each member of the group. In addition, when using
the correct modules to handle images, Python can be decent in speed.
...
...
@@ -140,7 +140,7 @@ The outcome of this operations will be a binary pattern.
\item
Given this pattern, the next step is to divide the pattern into cells. The
number of cells depends on the quality of the result, so trial and error is in
order. We start by dividing the pattern into cells of size 16.
\item
Compute a histogram for each cell.
...
...
@@ -154,7 +154,7 @@ order. Starting with dividing the pattern in to cells of size 16.
result is a feature vector of the image.
\item
Feed these vectors to a support vector machine. This will ``learn'' which
feature vector corresponds to which character.
\end{itemize}
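The cell and histogram steps above can be sketched in Python, the project's language. This is an illustrative sketch, not the project's actual code; the function name and the default cell size of 16 follow the description in the text:

```python
def feature_vector(lbp_image, cell_size=16):
    """Divide an LBP image (2D list of 8-bit patterns) into square
    cells, histogram each cell, and concatenate the histograms."""
    height, width = len(lbp_image), len(lbp_image[0])
    vector = []
    for cy in range(0, height, cell_size):
        for cx in range(0, width, cell_size):
            hist = [0] * 256  # one bin per possible 8-bit pattern
            for y in range(cy, min(cy + cell_size, height)):
                for x in range(cx, min(cx + cell_size, width)):
                    hist[lbp_image[y][x]] += 1
            vector.extend(hist)
    return vector
```

The resulting vectors are what would be fed to the support vector machine, which learns which feature vector corresponds to which character.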
...
...
@@ -184,7 +184,7 @@ choices we made.
\subsection{Character retrieval}
In order to retrieve the characters from the entire image, we need to
perform a perspective transformation. However, to do this, we need to know the
coordinates of the four corners of each character. For our dataset, these are
stored in XML files. So, the first step is to read these XML files.
...
...
@@ -194,7 +194,7 @@ The XML reader will return a 'license plate' object when given an XML file. The
license plate holds a list of up to six NormalizedImage characters and the
country the plate is from. The reader currently assumes that the XML file name
and the image name correspond, since this was the case for the given dataset.
This can easily be adjusted if required.
To parse the XML file, the minidom module is used, so the XML file can be
treated as a tree in which one can search for certain nodes. In each XML
...
...
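A minimal minidom sketch of this kind of reader follows. The element and attribute names here are hypothetical; the real dataset's XML schema differs:

```python
from xml.dom import minidom

# Hypothetical schema; the actual dataset uses different element names.
SAMPLE = """<plate country="NL">
  <character value="X">
    <point x="10" y="10"/><point x="50" y="12"/>
    <point x="48" y="40"/><point x="8" y="38"/>
  </character>
</plate>"""

def read_plate(xml_text):
    """Parse one plate: return its country and a list of
    (character value, corner coordinates) pairs."""
    doc = minidom.parseString(xml_text)
    plate = doc.getElementsByTagName('plate')[0]
    country = plate.getAttribute('country')
    characters = []
    for char in plate.getElementsByTagName('character'):
        corners = [(int(p.getAttribute('x')), int(p.getAttribute('y')))
                   for p in char.getElementsByTagName('point')]
        if len(corners) < 4:
            continue  # skip characters with fewer than four points
        characters.append((char.getAttribute('value'), corners))
    return country, characters
```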
@@ -203,7 +203,7 @@ will do is retrieve the current and most up-to-date version of the plate. The
reader will only get results from this version.
Now we are only interested in the individual characters, so we can skip the
location of the entire license plate. Each character has a single character
value, indicating what someone thought the letter or digit was, and four
coordinates to create a bounding box. If fewer than four points have been set,
the character will not be saved. Else, to make things not
...
...
@@ -213,7 +213,7 @@ it gives some extra freedom when using the data.
When four points have been gathered, the data from the actual image is
requested. For each corner a small margin is added (around 3 pixels) so that
no features will be lost and a minimum of new features is introduced by noise
in the margin.
In the next section you can read more about the perspective transformation that
is being done. After the transformation the character can be saved: Converted
...
...
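The perspective transformation mentioned here maps the four annotated corners onto an axis-aligned rectangle. A pure-Python sketch of how such a transform can be computed from four point correspondences (the function names are our own; a library such as OpenCV would normally provide this):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_transform(src, dst):
    """3x3 homography mapping four src corners onto four dst corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]  # fix the last entry of H to 1
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Apply homography H to a point (with perspective division)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Given the four corners of a character and the corners of the target rectangle, each pixel of the character can then be looked up through the inverse of this mapping.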
@@ -230,12 +230,12 @@ rectangle.
\subsection{Noise reduction}
The image contains a lot of noise, both from camera errors due to dark noise
etc. and from dirt on the license plate. In this case, noise therefore means
any unwanted difference in color from the surrounding pixels.
\paragraph*{Camera noise and small amounts of dirt}
The dirt on the license plate can be of different sizes. We can reduce the
smaller amounts of dirt in the same way as we reduce normal noise, by applying
a Gaussian blur to the image. This is the next step in our program.
\\
\\
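A 1D Gaussian kernel of the kind used in such a blur can be built as follows; this is a sketch, and the three-sigma radius on each side (giving the roughly six-sigma total width discussed later) is an assumption:

```python
import math

def gaussian_kernel(sigma):
    """Sampled, normalized 1D Gaussian. A Gaussian blur applies it
    horizontally and then vertically (the filter is separable)."""
    radius = int(math.ceil(3 * sigma))  # cover ~6*sigma total width
    kernel = [math.exp(-(i * i) / (2.0 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    return [k / total for k in kernel]  # normalize so weights sum to 1
```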
...
...
@@ -254,7 +254,7 @@ characters will still be conserved in the LBP, even if there is dirt
surrounding the character.
\subsection{Creating Local Binary Patterns and feature vector}
Every pixel is a center pixel, and it is also a neighbour value to evaluate,
but not at the same time. Every pixel is evaluated as shown in the explanation
of the LBP algorithm. There are several neighbourhoods we can evaluate. We have
tried the following neighbourhoods:
...
...
@@ -274,12 +274,12 @@ would add a computational step, thus making the code execute slower. In the
next section we will describe what the best neighbourhood was.
Take an example where the full square can be evaluated, so none of the
neighbours are out of bounds. The first to be checked is the pixel in the
bottom-left corner of the $3 \times 3$ square, with coordinate $(x-1, y-1)$,
with $g_c$ as center pixel that has coordinates $(x, y)$. If the grayscale
value of the neighbour in the bottom-left corner is greater than the grayscale
value of the center pixel, return true. Bit-shift the first bit by 7; the
outcome is now 10000000. The second neighbour will be bit-shifted by 6, and so
on, until we are at 0. The result is a binary pattern of the local point just
evaluated.
Now only the edge pixels are a problem, but a simple check if the location of
...
...
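The bit-shifting described above can be written out directly in Python. The exact neighbour ordering after the bottom-left corner is our assumption, since the text only fixes that the first neighbour gets shift 7:

```python
def lbp_value(img, x, y):
    """8-bit LBP for the pixel at (x, y) in a 2D grayscale list.
    Neighbours are visited clockwise from (x-1, y-1), receiving
    shifts 7 down to 0."""
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    gc = img[y][x]
    pattern = 0
    for shift, (dx, dy) in zip(range(7, -1, -1), offsets):
        if img[y + dy][x + dx] > gc:  # neighbour brighter than center
            pattern |= 1 << shift
    return pattern
```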
@@ -346,7 +346,7 @@ scripts is named here and a description is given on what the script does.
\section{Finding parameters}
Now that we have a functioning system, we need to tune it to work properly for
license plates. This means we need to find the right parameters. Throughout
the program we have a number of parameters for which no standard choice is
available. These parameters are:
\\
\\
...
...
@@ -371,7 +371,7 @@ The first parameter to decide on, is the $\sigma$ used in the Gaussian blur. To
find this parameter, we tested a few values, by trying them and checking the
results. It turned out that the best value was $\sigma = 1.4$.
\\
\\
Theoretically, this can be explained as follows. The filter has a width of
$6 \cdot \sigma = 6 \cdot 1.4 = 8.4$ pixels. The width of a `stroke' in a
character is, after our resize operations, around 8 pixels. This means our
filter `matches' the smallest detail size we want to be able to see, so
everything that is
...
...
@@ -454,7 +454,7 @@ $2^{-1}$ & 61 & 61 & 61 & 61 & 62 &
92 & 93 & 93 & 86 & 45 \\
$2^7$ & 61 & 70 & 84 & 90 & 92 & 93 & 93 & 93 & 86 & 45 \\
$2^9$ & 70 & 84 & 90 & 92 & 92 & 93 & 93 & 93 & 86 & 45 \\
$2^{11}$ & 84 & 90 & 92 & 92 & 92 & 92 & 93 & 93 & 86 & 45 \\
...
...
@@ -492,7 +492,7 @@ good scores per character, and $93\%$ seems to be a fairly good result.\\
\\
Possibilities for improvement of this score would be more extensive
grid-searches, finding more exact values for $c$ and $\gamma$, more tests
for finding $\sigma$ and more experiments on the size and shape of the
neighbourhoods.
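The grid-search itself is a simple double loop over powers of two. A sketch of the search skeleton, with the scoring function (in practice, SVM cross-validation accuracy) left as a caller-supplied stand-in; the exponent ranges are illustrative, not the report's exact grid:

```python
def grid_search(evaluate, c_exponents=range(-1, 13, 2),
                gamma_exponents=range(-15, 1, 2)):
    """Try every (c, gamma) pair on a power-of-two grid; keep the pair
    with the best score. `evaluate(c, gamma)` returns a score to maximize."""
    best = (None, None, float('-inf'))
    for ce in c_exponents:
        for ge in gamma_exponents:
            c, gamma = 2.0 ** ce, 2.0 ** ge
            score = evaluate(c, gamma)
            if score > best[2]:
                best = (c, gamma, score)
    return best
```

A finer second pass around the best coarse-grid pair is the usual way to get the "more exact values" mentioned above.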
\subsection{Speed}
...
...
@@ -580,7 +580,7 @@ implementation was most suited to be done by one individually or in a pair.
\subsubsection*{Who did what}
Gijs created the basic classes we could use and helped everyone by keeping
track of what was required to be finished and who was working on what.
Tadde\"us and Jayke were mostly working on the SVM and on all kinds of tests
of whether the histograms were matching and what parameters had to be used.
Fabi\"en created the functions to read and parse the given XML files with
...
...
@@ -606,4 +606,4 @@ were instantaneous! A crew to remember.
\includegraphics[scale=0.5]{faulty.png}
\caption{Faulty classifications of characters}
\end{figure}
\end{document}