Fabien 14 years ago
Parent
Commit
c1d7661866
1 changed file with 30 additions and 3 deletions
      docs/verslag.tex


@@ -206,11 +206,38 @@ choices we made.
 In order to retrieve the license plate from the entire image, we need to
 perform a perspective transformation. However, to do this, we need to know the
 coordinates of the four corners of the licenseplate. For our dataset, this is
-stored in XML files. So, the first step is to read these XML files.\\
-\\
-\paragraph*{XML reader}
+stored in XML files. So, the first step is to read these XML files.
 
 
+\paragraph*{XML reader}
 
 
+The XML reader returns a `license plate' object when given an XML file. The
+license plate holds a list of up to six NormalizedImage characters and the
+country the plate is from. The reader currently assumes that the XML file and
+the image have corresponding names, since this was the case for the given
+dataset. This can easily be adjusted if required.
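+
+A minimal sketch of such an object, with illustrative field names (the actual
+class may look different):
+
+\begin{verbatim}
+class LicensePlate:
+    def __init__(self, country, characters=None):
+        # Country the plate is from, e.g. 'NL'
+        self.country = country
+        # List of up to six NormalizedImage characters
+        self.characters = characters if characters is not None else []
+\end{verbatim}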
+
+To parse the XML file, the minidom module is used, so the XML file can be
+treated as a tree in which one can search for certain nodes. Each XML file may
+contain multiple versions of the plate, so the first thing the reader does is
+retrieve the most up-to-date version. The reader only takes results from this
+version.
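+
+A sketch of this step; the tag name 'version' and the attribute used to order
+the versions are assumptions about the dataset's schema:
+
+\begin{verbatim}
+from xml.dom import minidom
+
+def current_version(xml_path):
+    # Parse the XML file into a tree that can be searched for nodes
+    dom = minidom.parse(xml_path)
+    versions = dom.getElementsByTagName('version')
+    # Keep only the most up-to-date version of the plate
+    return max(versions, key=lambda v: int(v.getAttribute('value')))
+\end{verbatim}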
+
+Since we are only interested in the individual characters, we can skip the
+location of the entire license plate. Each character has a single character
+value, indicating which letter or digit the annotator thought it was, and four
+coordinates that form a bounding box. To keep things simple, a Character class
+and a Point class are used. They act much like associative lists, but give
+extra freedom in using the data. If fewer than four points have been set, the
+character is not saved.
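+
+A possible sketch of these two helper classes; the exact fields and method
+names are assumptions:
+
+\begin{verbatim}
+class Point:
+    def __init__(self, x, y):
+        self.x = x
+        self.y = y
+
+class Character:
+    def __init__(self, value):
+        # The annotated letter or digit, e.g. 'X' or '3'
+        self.value = value
+        # Corner Points of the bounding box
+        self.corners = []
+
+    def is_complete(self):
+        # Characters with fewer than four corners are not saved
+        return len(self.corners) == 4
+\end{verbatim}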
+
+When four points have been gathered, the corresponding data is requested from
+the actual image. A small margin (around 3 pixels) is added at each corner, so
+that no features are lost and as few new features as possible are introduced
+by noise in the margin.
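+
+One way this margin could be applied; moving each corner away from the centre
+of the character is an interpretation, and clamping to the image border is
+left out:
+
+\begin{verbatim}
+def widen_corners(corners, margin=3):
+    # Push every corner a few pixels outward so no features are lost
+    cx = sum(p.x for p in corners) / len(corners)
+    cy = sum(p.y for p in corners) / len(corners)
+    return [Point(p.x + (margin if p.x > cx else -margin),
+                  p.y + (margin if p.y > cy else -margin))
+            for p in corners]
+\end{verbatim}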
+
+The next section describes the perspective transformation that is applied.
+After the transformation, the character can be saved as an image: it is
+converted to grayscale, but nothing further. This was used to create a
+learning set. If it does not need to be saved as an actual image, it is
+converted to a NormalizedImage instead. When these actions have been completed
+for each character, the license plate is usable in the rest of the code.
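+
+A rough sketch of this last step, assuming a PIL image for the transformed
+character and a hypothetical NormalizedImage constructor:
+
+\begin{verbatim}
+def finish_character(character, transformed_image, save_path=None):
+    gray = transformed_image.convert('L')   # grayscale, nothing further
+    if save_path:
+        gray.save(save_path)                # used to build the learning set
+    else:
+        character.image = NormalizedImage(gray)  # assumed constructor
+\end{verbatim}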
 
 
 \paragraph*{Perspective transformation}
 Once we retrieved the cornerpoints of the license plate, we feed those to a