@@ -566,6 +566,8 @@ is not advisable to use.
There are a few points open for improvement. We discuss these below.
+\subsection{Other Local Binary Patterns}
+
We obtained some good results, but of course there is more to explore.
For instance, we did research on three different patterns, but there are more
patterns to try: we only tried the (8,3)-, (8,5)- and
@@ -574,10 +576,11 @@ best result, for a wider range of neighbourhoods. We have proven that the size
and number of points do influence the performance of the classifier, so further
research into this would be worthwhile.
-One important feature of our framework is that the LBP class can be changed by
-an other technique. This may be a different algorithm than LBP. Also the
-classifier can be changed in an other classifier. By applying these kind of
-changes we can find the best way to recognize licence plates.
+The expectation is that a pattern with a larger diameter, but with the same
+number of points, is worth trying. The theory behind this is that when a
+Gaussian blur is used to reduce noise, the edges are blurred as well. By
+sampling at a larger distance from the centre pixel, the compared points lie
+further apart, so a blurred edge between them still produces a clear
+intensity difference.
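+
+As a rough illustration (the function name and parameters below are
+illustrative, not the classes of our framework), a local binary pattern with
+an arbitrary number of points and radius could be sampled as follows:
+
+\begin{verbatim}
+import numpy as np
+
+def lbp_value(image, y, x, points=8, radius=3):
+    """Sketch: LBP code of the pixel at (y, x) for a (points, radius)
+    pattern. Assumes (y, x) lies at least `radius` pixels from the border;
+    sampling uses nearest-neighbour lookups on a circle."""
+    center = image[y, x]
+    code = 0
+    for p in range(points):
+        angle = 2.0 * np.pi * p / points
+        sy = int(round(y + radius * np.sin(angle)))
+        sx = int(round(x + radius * np.cos(angle)))
+        if image[sy, sx] >= center:
+            code |= 1 << p
+    return code
+\end{verbatim}
+
+Varying \texttt{points} and \texttt{radius} in such a function is all that is
+needed to experiment with other neighbourhoods.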
+
+\subsection{Context Information}
We do not make assumptions when a letter is recognized. For instance, Dutch licence
plates consist of three blocks, each containing two digits or two characters. Or for the new
@@ -587,6 +590,8 @@ case when one digit is most likely to be followed by a second digit and not a
character. Maybe these assumptions can be used in future research to achieve a
higher accuracy rate.
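
A sketch of how such context information could be used (the helper below is
hypothetical and not part of our framework): once the first character of a
block has been classified as a digit, the candidates for the second character
of that block can be restricted to digits as well.

\begin{verbatim}
DIGITS = set("0123456789")
LETTERS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

def second_of_block(scores, first_char):
    """scores maps candidate characters to classifier scores; return the
    best-scoring candidate of the same kind (digit or letter) as the first
    character of the block. Assumes at least one such candidate exists."""
    allowed = DIGITS if first_char in DIGITS else LETTERS
    return max((c for c in scores if c in allowed), key=lambda c: scores[c])
\end{verbatim}
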
+\subsection{Speedup}
+
A possibility to improve the speed would be to separate the creation of the
Gaussian kernel from the convolution. This way, the kernel can be cached,
which is a big improvement. At this moment, we calculate this kernel
@@ -595,10 +600,11 @@ standard Python function, but we realised too late that there is performance
loss due to this.
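
A minimal sketch of this separation (the function names and the dictionary
cache are illustrative, not our actual implementation): the kernel is computed
once per (size, sigma) pair and reused for every subsequent convolution.

\begin{verbatim}
import numpy as np

_kernel_cache = {}

def gaussian_kernel(size, sigma):
    """Create, or fetch from the cache, a normalised 1D Gaussian kernel."""
    key = (size, sigma)
    if key not in _kernel_cache:
        xs = np.arange(size) - (size - 1) / 2.0
        kernel = np.exp(-(xs ** 2) / (2.0 * sigma ** 2))
        _kernel_cache[key] = kernel / kernel.sum()
    return _kernel_cache[key]

def gaussian_blur(image, size, sigma):
    """Blur by convolving rows and columns with the cached 1D kernel."""
    kernel = gaussian_kernel(size, sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, kernel, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode='same')
\end{verbatim}
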
Another performance loss was introduced by checking, for each pixel, whether it lies
-in the image. This induces a lot of function calls and four conditional checks
-per pixel. A faster method would be to first set a border of black pixels
-around the image, so the inImage function is now done implicitly because it
-simply finds a black pixel if it falls outside the original image borders.
+in the image. This induces one function call and four conditional checks
+per pixel, which costs performance. A faster method would be to first place a
+border of black pixels around the image, so that the check done by the
+inImage function becomes implicit: a lookup that falls outside the original
+image borders simply finds a black pixel.
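+
+A sketch of this idea (using \texttt{numpy.pad}; the function name is ours):
+
+\begin{verbatim}
+import numpy as np
+
+def add_black_border(image, radius):
+    """Pad the image with `radius` black (zero) pixels on every side, so
+    that a neighbourhood lookup can never fall outside the array and no
+    per-pixel inImage check is needed."""
+    return np.pad(image, radius, mode='constant', constant_values=0)
+\end{verbatim}
+
+A pixel at $(y, x)$ in the original image is then found at
+$(y + \mathit{radius}, x + \mathit{radius})$ in the padded image.
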
\section{Conclusion}