Taddeüs Kroes / licenseplates · Commits

Commit 8da249e6 authored Dec 22, 2011 by Taddeus Kroes

Worked on report.

parent 4af16b98

Showing 1 changed file with 23 additions and 22 deletions:

docs/report.tex (+23 -22)
@@ -652,46 +652,47 @@ were and whether we were able to find a proper solution for them.
 \subsubsection*{Dataset}

-We did experience a number of problems with the provided dataset. A number of
-these are problems to be expected in the real world, but which make development
-harder. Others are more elemental problems.
+We have encountered a number of problems with the provided dataset. A number of
+these are to be expected in the real world, but they do make development
+harder. Others are more elementary problems.

-The first problem was that the dataset contains a lot of license plates which
+The first problem is that the dataset contains a lot of license plates which
 are problematic to read, due to excessive amounts of dirt on them. Of course,
 this is something you would encounter in the real situation, but it made it
 hard for us to see whether there was a coding error or just a bad example.

-Another problem was that there were license plates of several countries in
+Another problem is that there were license plates of several countries in
 the dataset. Each of these countries has it own font, which also makes it
 hard to identify these plates, unless there are a lot of these plates in the
 learning set.

 A problem that is more elemental is that some of the characters in the dataset
-are not properly classified. This is of course very problematic, both for
-training the SVM as for checking the performance. This meant we had to check
-each character whether its description was correct.
+are not properly classified. This is obviously very problematic, because it
+means that we had to manually verify the value of each character.

-As final note, we would like to state that an, in our eyes, unrealistic amount
-of characters has a bad quality, with a lot of dirt, or crooked plates
-etcetera. Our own experience is that the average license plate is less hard to
+As final note, we would like to state that a seemingly unrealistic amount
+of characters has a poor quality, with a lot of dirt, or crooked plates
+etc. Our own experience is that the average license plate is less hard to
 read. The local binary pattern method has proven to work on this set, and as
 such has proven that it performs good in worst-case scenarios, but we would
-like to see how it performs on a more realistic dataset.
+like to see how it performs on a dataset with a larger amount of readable,
+higher-resolution characters.
-\subsubsection*{SVM}
+\subsubsection*{\texttt{libsvm}}

-We also had trouble with the SVM for Python. The standard Python SVM, libsvm,
-had a poor documentation. There was no explanation what so ever on which
-parameter had to be what. This made it a lot harder for us to see what went
-wrong in the program.
+We also had trouble with the SVM for Python. The standard Python SVM,
+\texttt{libsvm}, had a poor documentation. There was no documentation
+whatsoever for a number of functions. This did not improve efficiency during
+the process of development.
 \subsection{Workload distribution}

-The first two weeks were team based. Basically the LBP algorithm could be
-implemented in the first hour, while some talked and someone did the typing.
-Some additional 'basics' where created in similar fashion. This ensured that
-every team member was up-to-date and could start figuring out which part of the
-implementation was most suited to be done by one individually or in a pair.
+The first two weeks were very team based. Basically, the LBP algorithm was
+implemented in the first day, as result of a collective effort. Some
+additional `basic' functions and classes were created in similar fashion. This
+ensured that every team member was up-to-date and could start figuring out
+which part of the implementation was most suited to be done by one individually
+or in a pair.
 \subsubsection*{Who did what}
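For context on the libsvm complaint in the diff above: the report refers to the Python bindings that ship with libsvm, whose parameters were poorly documented at the time. The snippet below is a minimal illustrative sketch of how that interface is typically driven (libsvm 3.x style); it is not taken from the licenseplates code, and the toy data and option values (-t 2 for an RBF kernel, -c for the cost parameter C, -g for the kernel gamma) are chosen here purely for illustration.

    # Illustrative sketch only, not code from this repository.
    # Assumes the Python bindings shipped with libsvm >= 3.0 are importable.
    import svmutil

    # Toy training data: labels plus sparse feature vectors ({index: value}).
    labels = [1, -1, 1, -1]
    features = [{1: 0.9, 2: 0.1}, {1: 0.1, 2: 0.9},
                {1: 0.8, 2: 0.2}, {1: 0.2, 2: 0.8}]

    # Most of the sparsely documented knobs live in this option string:
    # -t selects the kernel type (2 = RBF), -c the cost C, -g the RBF gamma.
    problem = svmutil.svm_problem(labels, features)
    parameters = svmutil.svm_parameter('-t 2 -c 4 -g 0.5')

    model = svmutil.svm_train(problem, parameters)

    # svm_predict returns predicted labels, accuracy statistics and decision values.
    predicted, accuracy, values = svmutil.svm_predict(labels, features, model)

The older 2.x bindings pass the same settings as keyword arguments to svm_parameter instead of an option string, which is worth checking if the project pinned an earlier libsvm release.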