\documentclass[10pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{subfigure}
\usepackage{float}
\usepackage{hyperref}

\title{Peephole Optimizer}
\author{Jayke Meijer (6049885), Richard Torenvliet (6138861), Tadde\"us Kroes
    (6054129)}

\begin{document}
\maketitle

\section{Introduction}
The goal of the assignment is to implement the optimization stage of a
compiler. To reach this goal, both the parser and the optimizer part of the
compiler have to be implemented.

Our input is the output of the xgcc cross compiler for a number of C
programs included with the assignment. This output is Assembly code, but it
is not optimized. An important part of the assignment is parsing this data,
which is done with Lex and Yacc. The Lexer finds the keywords (tokens) that
match the regular expressions provided to it; Yacc then takes over and turns
these tokens into actions.
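To illustrate the idea, a minimal example of such a lexer and parser, written
with the PLY module discussed in the Implementation section, could look as
follows. The token names and the single grammar rule are purely illustrative
and much simpler than the grammar we actually use:
\begin{verbatim}
# Illustrative fragment only: the token definitions and the grammar
# rule are far simpler than the ones used in our parser.
import ply.lex as lex
import ply.yacc as yacc

tokens = ('MNEMONIC', 'REGISTER')

t_MNEMONIC = r'[a-z]+'         # e.g. 'addu', 'move'
t_REGISTER = r'\$[a-z0-9]+'    # e.g. '$4', '$regA'
t_ignore = ' \t,'

def t_error(t):
    t.lexer.skip(1)

def p_instruction(p):
    'instruction : MNEMONIC REGISTER REGISTER REGISTER'
    # The 'action': build a simple representation of the instruction.
    p[0] = (p[1], [p[2], p[3], p[4]])

def p_error(p):
    pass

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse('addu $2, $4, $3'))   # ('addu', ['$2', '$4', '$3'])
\end{verbatim}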

\section{Design}

There are two general types of optimizations of the Assembly code: global
optimizations and optimizations on so-called basic blocks. These
optimizations will be discussed separately.

\subsection{Global optimizations}

We only perform one global optimization, which is optimizing branch-jump
statements. The unoptimized Assembly code contains sequences of code of the
following structure:
\begin{verbatim}
    beq ...,$Lx
    j $Ly
$Lx:   ...
\end{verbatim}
This is inefficient: the \texttt{beq} branches to the label \texttt{\$Lx},
which directly follows the jump statement. It would be more efficient to
replace the branch statement with a \texttt{bne} (the opposite condition) to
the label used in the jump statement. This way the jump statement can be
eliminated, since the label \texttt{\$Lx} follows anyway. The same can of
course be done for the opposite case, where a \texttt{bne} is changed into a
\texttt{beq}.

Since this optimization works across jump and label statements, it cannot be
performed during the basic block optimizations. The reason for this will
become clearer in the following section.
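A sketch of how this rewrite could be detected on the list of statements used
by our optimizer (see the Implementation section) is given below; the
attribute names and the \texttt{is\_label} helper are only illustrative and
do not necessarily match our actual code:
\begin{verbatim}
# Illustrative sketch: rewrite 'beq ..,$Lx; j $Ly; $Lx:' into
# 'bne ..,$Ly; $Lx:' (and the bne/beq case analogously).
def optimize_branch_jump(statements):
    opposite = {'beq': 'bne', 'bne': 'beq'}
    changed = False
    i = 0
    while i + 2 < len(statements):
        branch, jump, label = statements[i], statements[i+1], statements[i+2]
        if (branch.name in opposite and jump.name == 'j'
                and label.is_label() and branch.args[-1] == label.name):
            branch.name = opposite[branch.name]   # invert the condition
            branch.args[-1] = jump.args[0]        # branch to the jump target
            del statements[i + 1]                 # the jump is now redundant
            changed = True
        i += 1
    return changed
\end{verbatim}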

\subsection{Basic Block Optimizations}

Optimizations on basic blocks are the more important part of the optimizer.
First, what is a basic block? A basic block is a sequence of statements that
is guaranteed to be executed in that order, and that order alone. This is the
case for a piece of code that does not contain any branches or jumps.

To divide the code into basic blocks, we first define what the leader of a
basic block is. We call a statement a leader if it is either a jump/branch
statement, or the target of such a statement. A basic block then runs from
one leader until the next leader.
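A minimal sketch of this division in Python could look as follows; the
\texttt{is\_jump} and \texttt{is\_label} helpers and the \texttt{Block}
constructor (described in the Implementation section) are assumptions for
illustration purposes:
\begin{verbatim}
# Sketch only: is_jump(), is_label() and the Block constructor are
# assumed helpers, not necessarily our exact interface.
def to_basic_blocks(statements):
    # A leader is a jump/branch statement or a potential target of one,
    # i.e. a label; a block runs from one leader to the next.
    leaders = [i for i, s in enumerate(statements)
               if i == 0 or s.is_jump() or s.is_label()]
    bounds = leaders + [len(statements)]
    return [Block(statements[start:end])
            for start, end in zip(bounds, bounds[1:])]
\end{verbatim}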

There are quite a few optimizations we perform on these basic blocks, so we
describe the types of optimizations here instead of each individual
optimization.

\subsubsection*{Standard peephole optimizations}

These are optimizations that simply look for a certain statement or pattern of
statements, and optimize these. For example,
\begin{verbatim}
mov $regA,$regB
instr $regA,$regA,...
\end{verbatim}
can be replaced by
\begin{verbatim}
instr $regA,$regB,...
\end{verbatim}
since the register \texttt{\$regA} gets overwritten by the second instruction
anyway, and the instruction can easily use \texttt{\$regB} instead of
\texttt{\$regA}. There are a few more of these cases, which are the same as
those described on the practicum page
\footnote{\url{http://staff.science.uva.nl/~andy/compiler/prac.html}} and in
Appendix \ref{opt}.
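As an illustration, matching the \texttt{mov}/\texttt{instr} pattern above on
two consecutive statements could be done as follows (the attribute names are
again only illustrative):
\begin{verbatim}
# Illustrative sketch of one peephole pattern.
def fold_move_into_next(cur, nxt):
    # mov $regA,$regB ; instr $regA,$regA,...  ->  instr $regA,$regB,...
    if cur.name == 'mov' and nxt.args[:2] == [cur.args[0], cur.args[0]]:
        nxt.args[1] = cur.args[1]    # use $regB directly as the source
        return True                  # the caller removes the useless mov
    return False
\end{verbatim}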

\subsubsection*{Common subexpression elimination}

A more advanced optimization is common subexpression elimination. This means
that expensive operations such as a multiplication or addition are performed
only once, after which the result is `copied' into the variables where it is
needed.
\begin{verbatim}
addu $2,$4,$3               addu $t1,$4,$3
...                         move $2,$t1
...                     ->  ...
...                         ...
addu $5,$4,$3               move $5,$t1
\end{verbatim}

A standard method for doing this is the creation of a DAG, or Directed
Acyclic Graph. However, this requires a fairly advanced implementation. Our
implementation is slightly less fancy, but easier to implement.
We search from the end of the block upwards for instructions that are
eligible for CSE. If we find one, we check further up in the code for the
same instruction, and add every occurrence to a temporary storage list. This
is done until the beginning of the block is reached or until one of the
arguments of the expression is assigned.

We now add the instruction above the first occurrence, and write its result
into a new variable. Then all occurrences of the expression can be replaced
by a move from the new variable into the original destination variable of
the instruction.

This is a less efficient method than the DAG, but because the basic blocks
are in general not very large and the execution time of the optimizer is not
a primary concern, this is not a big problem.
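A sketch of this backward search in Python is given below; the helper
functions \texttt{is\_expensive}, \texttt{defines} and \texttt{new\_temp} and
the \texttt{Statement} constructor arguments are illustrative assumptions,
meant to show the idea rather than the exact implementation:
\begin{verbatim}
# Sketch of the CSE search; helper names are illustrative assumptions.
def common_subexpression_elimination(statements):
    for i in range(len(statements) - 1, -1, -1):
        last = statements[i]
        if not is_expensive(last):          # e.g. addu, mult, ...
            continue
        # Walk upwards, collecting identical expressions, until an
        # argument of the expression is (re)assigned.
        occurrences = [i]
        for j in range(i - 1, -1, -1):
            stmt = statements[j]
            if stmt.name == last.name and stmt.args[1:] == last.args[1:]:
                occurrences.append(j)
            if defines(stmt, last.args[1]) or defines(stmt, last.args[2]):
                break
        if len(occurrences) < 2:
            continue
        # Compute the expression once, into a new variable, above the
        # first occurrence, and turn every occurrence into a move.
        temp = new_temp()
        for k in occurrences:
            dest = statements[k].args[0]
            statements[k] = Statement('command', 'move', [dest, temp])
        statements.insert(min(occurrences),
                          Statement('command', last.name,
                                    [temp] + last.args[1:]))
        return True
    return False
\end{verbatim}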
\subsubsection*{Fold constants}
Constant folding is an optimization where the outcome of arithmetic is
calculated at compile time. If a variable \texttt{x} is assigned a constant
value, say 10, then all following occurrences of \texttt{x} are replaced by
10 until \texttt{x} is redefined, in other words until the current definition
of \texttt{x} becomes dead. Therefore a reaching definitions analysis is
needed. Folding itself is only possible when both operands of an arithmetic
instruction are known constants; if this is not the case, the calculation
cannot be performed at compile time.
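The intended transformation could, for instance, look as follows (the
concrete instructions are only meant to illustrate the idea):
\begin{verbatim}
li   $regA, 10              li   $regA, 10
...                     ->  ...
addu $regB, $regA, 5        li   $regB, 15
\end{verbatim}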
\subsubsection*{Copy propagation}

Copy propagation `unpacks' a move instruction, by replacing its destination
address with its source address in the code following the move instruction.

This is not a direct optimization, but this does allow for a more effective
dead code elimination.

The code of the block is checked linearly. When a move operation is
encountered, the source and destination address of this move are stored. When
a normal operation with a source and a destination address is found, a number
of checks are performed.

The first check is whether the operation's destination address is stored as
the destination address of an earlier move instruction. If so, that move
instruction is no longer valid, so it can no longer be used for propagation.
Otherwise, we continue with the second check.

In the second check, the operation's source address is compared to the
destination addresses of all still valid move operations. If they match, the
source address in the current operation is replaced with the source address
of the corresponding move operation.

An example would be the following:
\begin{verbatim}
move $regA, $regB           move $regA, $regB
...                         ...
Code not writing $regA, ->  ...
$regB                       ...
...                         ...
addu $regC, $regA, ...      addu $regC, $regB, ...
\end{verbatim}
This code shows that \texttt{\$regA} is replaced with \texttt{\$regB}. This
way, the move instruction might have become useless, and it will then be
removed by the dead code elimination.
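A sketch of this linear scan over a basic block (with the same illustrative
attribute names and helpers as before) could look like this:
\begin{verbatim}
# Sketch of copy propagation; is_command(), is_move() and the
# attribute names are illustrative assumptions.
def copy_propagation(statements):
    copies = {}        # destination of a valid move -> its source
    changed = False
    for stmt in statements:
        if not stmt.is_command():
            continue
        dest, sources = stmt.args[0], stmt.args[1:]
        if dest in copies:
            # First check: this statement overwrites the destination of
            # a stored move, so that move is no longer valid.
            del copies[dest]
        else:
            # Second check: replace a source address that matches the
            # destination of a still valid move by that move's source.
            for n, src in enumerate(sources, start=1):
                if src in copies:
                    stmt.args[n] = copies[src]
                    changed = True
        # A stored move also becomes invalid when its source is written.
        copies = {d: s for d, s in copies.items() if s != dest}
        if stmt.is_move():
            copies[dest] = stmt.args[1]
    return changed
\end{verbatim}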

\subsubsection*{Algebraic transformations}

Algebraic transformations use algebraic identities to replace relatively
expensive instructions by cheaper ones. For example, an addition or
subtraction with the constant 0 is just a move, a multiplication by 1 is a
move, a multiplication by 0 loads the constant 0, and a multiplication by 2
can be replaced by a shift. The full list of these transformations is given
in Appendix \ref{opt}.

\section{Implementation}

We decided to implement the optimizer in Python. We chose this programming
language because Python makes it easy to manipulate strings and to work
object-oriented, among other things.
It turns out that Lex and Yacc are also available as a Python module named
PLY (Python Lex-Yacc). This allows us to use one language, Python, instead of
two, i.e. C and Python. It also means no debugging is needed in C, only in
Python, which makes our assignment more feasible.

The program has three steps: parsing the Assembly code into a data structure
we can use, the so-called Intermediate Representation (IR), performing
optimizations on this IR, and writing the IR back to Assembly.
\subsection{Parsing}
The parsing is done with PLY, which allows us to perform Lex-Yacc tasks in
Python by using a Lex-Yacc like syntax. This way there is no need to combine
languages, as we would otherwise have to do because Lex and Yacc are coupled
with C.

We decided not to recognize every possible instruction exactly in the parser,
but only to determine whether a line is, for example, a command, a comment or
a gcc directive. Each line is then transformed into an object called a
Statement. A Statement has a type, a name and optionally a list of arguments.
These statements together form a statement list, which is placed in another
object called a Block. In the beginning there is one block for the entire
program, but after the global optimizations it is split into several blocks:
the basic blocks.
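In a simplified form, these objects could look as follows (the actual classes
contain a bit more bookkeeping):
\begin{verbatim}
# Simplified sketch of the Intermediate Representation.
class Statement:
    def __init__(self, stype, name, args=None):
        self.type = stype       # e.g. 'command', 'comment' or 'directive'
        self.name = name        # e.g. 'addu', or the text of a comment
        self.args = args or []  # e.g. ['$2', '$4', '$3']

class Block:
    def __init__(self, statements):
        self.statements = statements
\end{verbatim}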
\subsection{Optimization}

The optimizations are done in two different steps. First the global
optimizations are performed, which are only the optimizations on branch-jump
constructions. This is done repeatedly until there are no more changes.

After all possible global optimizations are done, the program is separated
into basic blocks. The algorithm to do this is described earlier: all jump
and branch instructions are called leaders, as are their targets, and a basic
block runs from leader to leader.

After the division into basic blocks, optimizations are performed on each of
these basic blocks. This is also done repeatedly, since sometimes several
steps are needed to optimize something.
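Schematically, the optimization driver could be summarised as follows; the
passes themselves are the routines described in the Design section, and the
function names here are only illustrative:
\begin{verbatim}
# Illustrative driver: every pass is repeated until it no longer
# changes anything (a fixed point is reached).
def optimize(statements, global_passes, split_into_blocks, block_passes):
    for gp in global_passes:
        while gp(statements):
            pass
    blocks = split_into_blocks(statements)
    for block in blocks:
        while any(bp(block) for bp in block_passes):
            pass
    return blocks
\end{verbatim}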
\subsection{Writing}

Once all the optimizations have been done, the IR needs to be rewritten into
Assembly code. After this step the xgcc cross compiler can create binary code
from the generated Assembly code.

The writer expects a list of statements, so first the blocks have to be
concatenated again into a single list. After this is done, the list is passed
on to the writer, which writes the instructions back to Assembly and saves
the file, so we can let xgcc compile it.
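A minimal sketch of this last step (the \texttt{to\_assembly} method on a
statement is an assumed helper):
\begin{verbatim}
# Sketch of the writer step: flatten the blocks back into a single
# statement list and write every statement as a line of Assembly.
def write_assembly(blocks, path):
    statements = [s for block in blocks for s in block.statements]
    with open(path, 'w') as out:
        for stmt in statements:
            out.write(stmt.to_assembly() + '\n')
\end{verbatim}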
\section{Results}
\subsection{pi.c}
\subsection{acron.c}
\subsection{whet.c}
\subsection{slalom.c}
\subsection{clinpack.c}
\appendix
\section{List of all optimizations}
\label{opt}

\textbf{Global optimizations}
\begin{verbatim}
    beq ...,$Lx             bne ...,$Ly
    j $Ly               ->  $Lx:   ...
$Lx:   ...


    bne ...,$Lx             beq ...,$Ly
    j $Ly               ->  $Lx:   ...
$Lx:   ...
\end{verbatim}
\textbf{Standard basic block optimizations}
\begin{verbatim}
mov $regA,$regA         ->  --- // remove it


mov $regA,$regB         ->  instr $regA, $regB,...
instr $regA, $regA,...


instr $regA,...             instr $4,...
mov [$4-$7], $regA      ->  jal XXX
jal  XXX


sw $regA,XXX            ->  sw $regA, XXX
ld $regA,XXX


shift $regA,$regA,0     ->  --- // remove it


add $regA,$regA,X       ->  lw ...,X($regA)
lw ...,0($regA)
\end{verbatim}
\textbf{Advanced basic block optimizations}

\begin{verbatim}
# Common subexpression elimination
addu $regA, $regB, 4        addu $regD, $regB, 4
...                         move $regA, $regD
Code not writing $regB  ->  ...
...                         ...
addu $regC, $regB, 4        move $regC, $regD


# Constant folding

# Copy propagation
move $regA, $regB           move $regA, $regB
...                         ...
Code not writing $regA, ->  ...
$regB                       ...
...                         ...
addu $regC, $regA, ...      addu $regC, $regB, ...


# Algebraic transformations
addu $regA, $regB, 0    ->  move $regA, $regB

subu $regA, $regB, 0    ->  move $regA, $regB

mult $regA, $regB, 1    ->  move $regA, $regB

mult $regA, $regB, 0    ->  li   $regA, 0

mult $regA, $regB, 2    ->  sll  $regA, $regB, 1
\end{verbatim}
\end{document}