@@ -652,46 +652,47 @@ were and whether we were able to find a proper solution for them.

\subsubsection*{Dataset}

-We did experience a number of problems with the provided dataset. A number of
-these are problems to be expected in the real world, but which make development
-harder. Others are more elemental problems.
+We have encountered a number of problems with the provided dataset. Some of
+these are to be expected in the real world, but they do make development
+harder. Others are more fundamental.

-The first problem was that the dataset contains a lot of license plates which
+The first problem is that the dataset contains many license plates that
are problematic to read, due to excessive amounts of dirt on them. Of course,
this is something you would encounter in a real situation, but it made it
hard for us to see whether there was a coding error or just a bad example.

-Another problem was that there were license plates of several countries in
+Another problem is that there are license plates from several countries in
the dataset. Each of these countries has its own font, which also makes it
hard to identify these plates, unless they are well represented in the
learning set.

A more fundamental problem is that some of the characters in the dataset
-are not properly classified. This is of course very problematic, both for
-training the SVM as for checking the performance. This meant we had to check
-each character whether its description was correct.
+are not properly classified. This is obviously very problematic, because it
+meant that we had to verify the label of each character manually.

-As final note, we would like to state that an, in our eyes, unrealistic amount
-of characters has a bad quality, with a lot of dirt, or crooked plates
-etcetera. Our own experience is that the average license plate is less hard to
+As a final note, we would like to state that a seemingly unrealistic number
+of the characters are of poor quality, with a lot of dirt, crooked plates,
+etc. Our own experience is that the average license plate is easier to
read. The local binary pattern method has proven to work on this set, and as
such has proven that it performs well in worst-case scenarios, but we would
-like to see how it performs on a more realistic dataset.
+like to see how it performs on a dataset with a larger proportion of readable,
+higher-resolution characters.

-\subsubsection*{SVM}
+\subsubsection*{\texttt{libsvm}}

-We also had trouble with the SVM for Python. The standard Python SVM, libsvm,
-had a poor documentation. There was no explanation what so ever on which
-parameter had to be what. This made it a lot harder for us to see what went
-wrong in the program.
+We also had trouble with the SVM for Python. The standard Python SVM,
+\texttt{libsvm}, is poorly documented: there is no documentation whatsoever
+for a number of its functions. This made development noticeably less
+efficient.
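+
+To give an impression of the interface involved, the sketch below shows a
+minimal training and prediction round trip, assuming \texttt{libsvm}'s
+bundled \texttt{svmutil} Python module. It is illustrative only: the feature
+vectors are placeholders, and the parameter string is an assumption rather
+than the settings we actually used.
+
+\begin{verbatim}
+from svmutil import svm_problem, svm_parameter, svm_train, svm_predict
+
+# Placeholder training data: class labels and feature vectors
+# (in our case the features would be LBP histograms).
+labels = [1, 1, -1, -1]
+features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
+
+problem = svm_problem(labels, features)
+# '-t 2' selects an RBF kernel, '-c 32' sets the soft-margin cost C,
+# '-q' suppresses libsvm's training output.
+parameters = svm_parameter('-t 2 -c 32 -q')
+model = svm_train(problem, parameters)
+
+# svm_predict returns predicted labels, accuracy statistics and
+# decision values for the given samples.
+predicted, accuracy, values = svm_predict(labels, features, model)
+\end{verbatim}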

\subsection{Workload distribution}

-The first two weeks were team based. Basically the LBP algorithm could be
-implemented in the first hour, while some talked and someone did the typing.
-Some additional 'basics' where created in similar fashion. This ensured that
-every team member was up-to-date and could start figuring out which part of the
-implementation was most suited to be done by one individually or in a pair.
+The first two weeks were very team-based. Basically, the LBP algorithm was
+implemented on the first day, as the result of a collective effort. Some
+additional `basic' functions and classes were created in a similar fashion.
+This ensured that every team member was up-to-date and could start figuring
+out which parts of the implementation were most suited to be done individually
+or in pairs.
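+
+The basic operator is simple enough that a first version fits in a handful of
+lines, which is why it could be written so quickly. The sketch below is
+illustrative only, not a copy of our implementation, and assumes a grayscale
+image stored as a NumPy array.
+
+\begin{verbatim}
+import numpy as np
+
+def lbp_8(image):
+    """Plain 8-neighbour local binary pattern of a grayscale image."""
+    img = np.asarray(image, dtype=float)
+    center = img[1:-1, 1:-1]
+    code = np.zeros(center.shape, dtype=np.uint8)
+    # Each neighbour contributes one bit: 1 if it is at least as
+    # bright as the centre pixel, 0 otherwise.
+    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
+               (1, 1), (1, 0), (1, -1), (0, -1)]
+    for bit, (dy, dx) in enumerate(offsets):
+        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
+                        1 + dx:img.shape[1] - 1 + dx]
+        code |= (neighbour >= center).astype(np.uint8) << bit
+    return code
+\end{verbatim}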

\subsubsection*{Who did what}