Commit bda02c1d authored Dec 21, 2011 by Taddeus Kroes

Worked on Classifier section in report.
parent 1848170e
Showing 1 changed file with 18 additions and 19 deletions

docs/report.tex (+18 / -19) @ bda02c1d
...
@@ -175,8 +175,7 @@ working with just one cell) gives us the best results.
 Given the LBP of a character, a Support Vector Machine can be used to classify
 the character to a character in a learning set. The SVM uses the concatenation
-of the histograms of all cells in an image as a feature vector (in the case we
-check the entire image no concatenation has to be done of course. The SVM can
+of the histograms of all cells in an image as a feature vector. The SVM can
 be trained with a subset of the given dataset called the ``learning set''. Once
 trained, the entire classifier can be saved as a Pickle object
 \footnote{See \url{http://docs.python.org/library/pickle.html}}
 for later usage.
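A rough Python sketch of the two steps described in this hunk, concatenating the per-cell LBP histograms into one feature vector and saving the trained classifier as a Pickle object. The function names and the file name classifier.dat are illustrative, not taken from the repository:

import pickle
import numpy as np

def build_feature_vector(cell_histograms):
    # Concatenate the LBP histograms of all cells into a single feature
    # vector; with one cell (the whole image) there is nothing to join.
    return np.concatenate(cell_histograms)

def save_classifier(classifier, path='classifier.dat'):
    # Store the trained classifier as a Pickle object for later usage.
    with open(path, 'wb') as f:
        pickle.dump(classifier, f)

def load_classifier(path='classifier.dat'):
    with open(path, 'rb') as f:
        return pickle.load(f)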
...
@@ -195,7 +194,7 @@ stored in XML files. So, the first step is to read these XML files.
 \paragraph*{XML reader}
-The XML reader will return a 'license plate' object when given an XML file. The
+The XML reader will return a `license plate' object when given an XML file. The
 licence plate holds a list of, up to six, NormalizedImage characters and from
 which country the plate is from. The reader is currently assuming the XML file
 and image name are corresponding, since this was the case for the given
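A small sketch of such an XML reader. The diff does not show the dataset's XML schema, so the element and attribute names below (plate, country, character) and the LicensePlate class are assumptions; in the project the characters are NormalizedImage objects cut from the plate image, here they are reduced to plain values to keep the sketch short:

import xml.etree.ElementTree as ET

class LicensePlate(object):
    # Holds the country the plate is from and up to six characters.
    def __init__(self, country, characters):
        self.country = country
        self.characters = characters

def read_plate(xml_path):
    # Assumed layout: <plate country="NL"><character value="X"/>...</plate>
    root = ET.parse(xml_path).getroot()
    country = root.get('country')
    characters = [c.get('value') for c in root.findall('character')]
    return LicensePlate(country, characters)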
...
@@ -305,22 +304,21 @@ increasing our performance, so we only have one histogram to feed to the SVM.
 \subsection{Classification}
 For the classification, we use a standard Python Support Vector Machine,
-\texttt{libsvm}. This is a often used SVM, and should allow us to simply feed
-the data from the LBP and Feature Vector steps into the SVM and receive
-results.
-\\
-\\
-Using a SVM has two steps. First you have to train the SVM, and then you can
-use it to classify data. The training step takes a lot of time, so luckily
-\texttt{libsvm} offers us an opportunity to save a trained SVM. This means,
-you do not have to train the SVM every time.
-\\
-\\
+\texttt{libsvm}. This is an often used SVM, and should allow us to simply feed
+data from the LBP and Feature Vector steps into the SVM and receive results.
+Using a SVM has two steps. First, the SVM has to be trained, and then it can be
+used to classify data. The training step takes a lot of time, but luckily
+\texttt{libsvm} offers us an opportunity to save a trained SVM. This means that
+the SVM only has to be changed once.
 We have decided to only include a character in the system if the SVM can be
-trained with at least 70 examples. This is done automatically, by splitting
-the data set in a trainingset and a testset, where the first 70 examples of
-a character are added to the trainingset, and all the following examples are
-added to the testset. Therefore, if there are not enough examples, all
-available examples end up in the trainingset, and non of these characters
-end up in the testset, thus they do not decrease our score. However, if this
+trained with at least 70 examples. This is done automatically, by splitting the
+data set in a learning set and a test set, where the first 70 examples of a
+character are added to the learning set, and all the following examples are
+added to the test set. Therefore, if there are not enough examples, all
+available examples end up in the learning set, and non of these characters end
+up in the test set, thus they do not decrease our score. However, if this
 character later does get offered to the system, the training is as good as
 possible, since it is trained with all available characters.
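A minimal sketch of the splitting and training logic described in this hunk, using the Python interface that ships with libsvm (svmutil). The grouping of feature vectors per character, the integer encoding of the labels via ord(), and the model file name are assumptions rather than the project's actual code:

from svmutil import svm_train, svm_save_model, svm_load_model, svm_predict

def split_data(examples_per_char, limit=70):
    # The first `limit` feature vectors of each character go to the learning
    # set, the rest to the test set; characters with fewer than `limit`
    # examples therefore never appear in the test set.
    learning, test = [], []
    for char, vectors in examples_per_char.items():
        learning += [(char, v) for v in vectors[:limit]]
        test += [(char, v) for v in vectors[limit:]]
    return learning, test

def train_and_save(learning, path='classifier.model'):
    # Training is slow, so the model is saved once and later runs load it.
    labels = [ord(char) for char, vector in learning]
    vectors = [vector for char, vector in learning]
    model = svm_train(labels, vectors)
    svm_save_model(path, model)
    return model

def test_score(test, path='classifier.model'):
    # Reload the saved model and classify the test set.
    model = svm_load_model(path)
    labels = [ord(char) for char, vector in test]
    vectors = [vector for char, vector in test]
    predictions, accuracy, values = svm_predict(labels, vectors, model)
    return accuracy[0]  # percentage of correctly classified characters

Saving and reloading the model this way matches the point made in the text: the expensive training step only has to be paid once.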
...
@@ -333,7 +331,7 @@ scripts is named here and a description is given on what the script does.
-\subsection*{\texttt{LearningSetGenerator.py}}
+\subsection*{\texttt{generate\_learning\_set.py}}
...
@@ -348,6 +346,7 @@ scripts is named here and a description is given on what the script does.
 \subsection*{\texttt{run\_classifier.py}}
+\section{Finding parameters}
 Now that we have a functioning system, we need to tune it to work properly for
...