Compiler
In computer science, a compiler is a translation system for a computer programming language. For example, a compiler might translate a human-readable program, called source code, into machine code. The theory behind compilers is sufficient for translation between any two formal languages, which are fully specified so that there can be no ambiguity, but not for translating between natural languages, which are far more complex.
The first implementation of a compiler, as well as the very idea for compilers, was created by Dr. Grace Murray Hopper, a mathematics professor and early programmer of the Harvard Mark I computer. Hopper, one of several pioneering women who worked on early computers, can arguably be credited with inventing the entire field of programming languages[1].
Input and Output
The input to a compiler is a file (or files) containing a program in a source language. The source file is most likely written in a human-readable programming language, though it could be any unambiguous representation of an algorithm, such as a flow chart or a state-machine description. The output of a compiler is a different file containing code in a target language, often a low-level machine language, though it could just as well be another high-level language.
Two levels of compilation
Most modern language implementations perform compilation in two stages: first from the source language to an intermediate language (such as assembly language or a portable bytecode), and second from the intermediate language to machine code. In so-called managed languages such as Java and C#, the second stage is postponed until just before the program executes, in which case it is called "just-in-time" compilation.
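A small sketch may make the intermediate language concrete. Python's own toolchain illustrates the first stage: CPython compiles source text into bytecode, an intermediate language, which the standard dis module can disassemble (a just-in-time compiler, as in the JVM or CLR, would translate comparable bytecode into machine code just before execution):

    import dis

    # CPython's built-in compile() translates source text into a code object
    # holding bytecode, the intermediate language of its virtual machine.
    code = compile("total = price * quantity", "<example>", "exec")

    # dis.dis() disassembles that bytecode, printing instructions such as
    # LOAD_NAME and STORE_NAME (exact opcode names vary by Python version).
    dis.dis(code)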
How a compiler translates
The tasks which a compiler must accomplish include the following:
- Lexical Analysis or Scanning, in which the input characters are grouped, usually according to a set of regular expressions, into a sequence of tokens.
- Syntactic Analysis or Parsing, in which the sequence of tokens is recognized, typically by a pushdown automaton, and a sequence of semantic actions is produced.
- Semantic Analysis, in which each semantic action builds an internal or intermediate representation of the source program, and context-sensitive errors (errors that cannot be detected by a context-free grammar) are detected.
- Code Generation, in which the intermediate representation is translated, a piece at a time, into the target language.
In practice, there may be multiple optimization stages scattered throughout this process. Additionally, most modern compilers repeatedly lower the program from one intermediate representation to a simpler one, in order to accommodate a wide range of optimizations that operate at different levels of detail.
Lexical Analysis
During lexical analysis, a set of regular expressions translates the input sequence (generally characters) into an output sequence of tokens. One popular tool that simplifies the creation of lexical analyzers is a software package called lex.
A few examples of errors detectable during this phase may help readers accustomed to programming: a lexical analyzer can catch errors confined to a single token, for instance a number containing the letter 'y', or a string missing its closing quote.
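As a minimal sketch, assuming a hypothetical expression language whose token names and patterns are invented for illustration, a lexical analyzer can be written directly in Python; a tool such as lex generates comparable table-driven code from a similar specification:

    import re

    # Each token class is named and described by a regular expression. The
    # NUMBER pattern forbids a trailing letter, so '12y4' is not a token.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+(?!\w)"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=;]"),
        ("SKIP",   r"[ \t]+"),
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(text):
        pos = 0
        while pos < len(text):
            match = MASTER.match(text, pos)
            if not match:
                # No token class matches here: a lexical error.
                raise SyntaxError(f"lexical error at position {pos}: {text[pos:pos + 10]!r}")
            pos = match.end()
            if match.lastgroup != "SKIP":
                yield (match.lastgroup, match.group())

    # list(tokenize("x = 12 + y;"))
    # -> [('IDENT', 'x'), ('OP', '='), ('NUMBER', '12'), ('OP', '+'), ('IDENT', 'y'), ('OP', ';')]
    # list(tokenize("x = 12y4;")) raises SyntaxError: '12y4' is not a valid token.

A string token whose pattern required both quotes would be rejected the same way if the closing quote were missing.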
Syntactic Analysis
During syntactic analysis, an input sequence of tokens is matched against a set of grammatical constructs called productions. As each production is matched, a semantic action routine is called. The role of each semantic action is to build an intermediate representation of the input program, such as a list of variables and functions and a sequence of instructions comprising each function.
Again, examples may help: a syntactic analyzer can detect a syntactic error such as a missing semicolon or curly brace, but it cannot detect the use of an undeclared variable. Requiring that a variable be declared before its use is a context-sensitive language property, whereas syntactic analyzers are generally context-free language recognizers.
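The following minimal sketch shows a recursive-descent parser, one common way of realizing a pushdown recognizer (the call stack plays the role of the automaton's stack). The grammar, token format, and tuple-based intermediate representation are all invented for illustration; the semantic actions are the lines that build tuples:

    # Hypothetical grammar:
    #   stmt -> IDENT EQ expr SEMI
    #   expr -> term ((PLUS | MINUS) term)*
    #   term -> NUMBER | IDENT

    def parse_stmt(tokens):
        toks = list(tokens) + [("EOF", "")]
        pos = 0

        def expect(kind):
            # Consume one token of the given kind or report a syntax error.
            nonlocal pos
            got, text = toks[pos]
            if got != kind:
                raise SyntaxError(f"expected {kind}, found {got} {text!r}")
            pos += 1
            return text

        def term():
            kind, text = toks[pos]
            if kind in ("NUMBER", "IDENT"):
                expect(kind)
                return (kind.lower(), text)        # semantic action: leaf node
            raise SyntaxError(f"expected NUMBER or IDENT, found {kind}")

        def expr():
            node = term()
            while toks[pos][0] in ("PLUS", "MINUS"):
                op = expect(toks[pos][0])
                node = (op, node, term())          # semantic action: operator node
            return node

        target = expect("IDENT")
        expect("EQ")
        tree = ("assign", target, expr())          # semantic action: statement node
        expect("SEMI")                             # a missing ';' is caught here
        return tree

    # parse_stmt([("IDENT", "x"), ("EQ", "="), ("NUMBER", "1"),
    #             ("PLUS", "+"), ("IDENT", "y"), ("SEMI", ";")])
    # -> ('assign', 'x', ('+', ('number', '1'), ('ident', 'y')))

Note that the undeclared 'y' parses without complaint; that check is deferred to semantic analysis, while omitting the SEMI token raises a SyntaxError.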
Semantic Analysis
During semantic analysis, a compiler builds and examines an intermediate representation of the source program and checks it for consistency.
For example, a semantic analyzer can detect the use of undeclared variables or functions.
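As a minimal sketch, assuming the tuple-based intermediate representation from the parsing sketch above (the node shapes are invented for illustration), one such check walks the representation with a symbol table and rejects uses of undeclared variables:

    def check_names(statements):
        declared = set()                   # a toy symbol table
        for stmt in statements:
            if stmt[0] == "declare":       # ('declare', name)
                declared.add(stmt[1])
            elif stmt[0] == "assign":      # ('assign', name, expr)
                if stmt[1] not in declared:
                    raise NameError(f"assignment to undeclared variable {stmt[1]!r}")
                check_expr(stmt[2], declared)

    def check_expr(node, declared):
        if node[0] == "ident" and node[1] not in declared:
            raise NameError(f"use of undeclared variable {node[1]!r}")
        if node[0] in ("+", "-"):          # ('+', left, right)
            check_expr(node[1], declared)
            check_expr(node[2], declared)

    # check_names([("declare", "x"), ("assign", "x", ("ident", "y"))])
    # raises NameError: use of undeclared variable 'y'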
Code Generation
- address mode
- application binary interface (ABI)
- instruction scheduling
- instruction set architecture
- intermediate representation
- memory hierarchy
- register
- register allocation
- register allocation by graph coloring
- retarget
- stack frame
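The terms above name the machinery a realistic code generator must confront. As a minimal sketch of the basic idea only, the following translates the tuple-based intermediate representation from the earlier sketches into instructions for an invented stack machine, ignoring register allocation, addressing modes, and the ABI entirely:

    def gen_expr(node, out):
        kind = node[0]
        if kind == "number":
            out.append(f"PUSH {node[1]}")  # push a constant
        elif kind == "ident":
            out.append(f"LOAD {node[1]}")  # push a variable's value
        elif kind in ("+", "-"):
            gen_expr(node[1], out)         # generate code for both operands,
            gen_expr(node[2], out)         # then combine the top two stack slots
            out.append("ADD" if kind == "+" else "SUB")

    def gen_stmt(stmt):
        out = []
        if stmt[0] == "assign":            # ('assign', name, expr)
            gen_expr(stmt[2], out)
            out.append(f"STORE {stmt[1]}")
        return out

    # gen_stmt(('assign', 'x', ('+', ('number', '1'), ('ident', 'y'))))
    # -> ['PUSH 1', 'LOAD y', 'ADD', 'STORE x']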
Optimizations
Optimizations are optional transformations which a compiler may apply when emitting output code. Optimizations may improve code execution speed or memory usage, but only when performance can be improved without sacrificing the correctness of the translation. A sketch of one such optimization, constant folding, follows the list below.
- alias analysis
- algebraic simplification
- constant folding
- copy propagation
- dead code elimination
- function inlining
- function specialization
- loop optimization
- loop peeling
- loop unrolling
- peephole optimization - examines the output over a small region (the peephole), searching for localized improvements
- reduction in strength
- tail call optimization
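As a minimal sketch of one entry from the list, the following performs constant folding over the tuple-based intermediate representation used in the earlier sketches: any subtree whose operands are all constants is evaluated once at compile time, saving work at run time without changing the meaning of the program.

    def fold(node):
        if node[0] in ("+", "-"):
            left, right = fold(node[1]), fold(node[2])
            if left[0] == "number" and right[0] == "number":
                a, b = int(left[1]), int(right[1])
                value = a + b if node[0] == "+" else a - b
                return ("number", str(value))      # evaluated at compile time
            return (node[0], left, right)          # rebuilt with folded children
        return node                                # leaves are unchanged

    # fold(('+', ('number', '2'), ('number', '3')))  ->  ('number', '5')
    # fold(('+', ('ident', 'x'), ('number', '3')))   ->  returned unchanged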
- ↑ Robert Slater (1987). Portraits in Silicon, ch. 20, p. 219. The MIT Press.