Stack frame

From Citizendium
Revision as of 09:19, 20 April 2007

In computer science, a stack frame is a memory management strategy used in some programming languages to create and destroy temporary (automatic) variables. Among other things, use of a stack makes it possible for subroutines to be called recursively. When a program runs, it is called a process, and it starts with one thread (which may create additional threads). Stacks are per thread: each thread's stack gains a new stack frame whenever a procedure (or function, subroutine, or method) is invoked. Each stack frame contains space for actual parameters, local variables, temporary locations, and (in some architectures) information about the calling context, such as the memory address of the calling subroutine. When the called procedure finishes executing, its stack frame is removed from the stack, and thread execution resumes in the calling procedure.

Many architectures provide special hardware and operating system support for stack frames. Although many different programming languages make use of stack frames, not all compilers pack the stack (i.e., organize the contents of each stack frame) in the same way, which can make it difficult to call between different programming languages. For example, the success of the Pascal programming language was handicapped because it used the stack differently from C and C++, making it difficult for Pascal programs to call code written in C or C++.

The order of packing information in the stack frame may or may not be part of the programming language specification. For example, on Solaris (Unix) systems, the GNU C compiler and the Sun C compiler pack the stack in slightly different ways; typically, two programs must be compiled with the same compiler if they are to link with each other at runtime. Other languages, such as Java (which runs in a special runtime environment independent of the underlying operating system), do specify exactly how the stack contents are organized, so compiler incompatibilities of the sort that C has do not arise in Java.

In a computer operating system, a special program, sometimes called a linking loader, launches programs by creating a process (a set of special data structures by which an operating system tracks the running of a program) which contains a very large virtual memory address space. Each process may contain one or more threads, and each thread will have its own stack. Each stack is typically assigned a fairly large region in the virtual address space of the enclosing process. If a stack grows to fill its currently allocated address space, the operating system may be able to allocate additional chunks of contiguous virtual memory space--if contiguous address space is still unused. But if too many nested subroutine calls occur (due perhaps to recursion that continues too long or never terminates), the stack may be unable to grow and claim more address space. At this point, the operating system will terminate the process with an error.

Principles of Operation

An illustration of the contents of a stack frame. Shown here are two frames (a function that has called another function). Blue arrows represent pointers. Note that parameters and locals can be addressed as FP ± k.


To use a stack frame, a thread keeps two pointers, often called the Stack Pointer (SP) and the Frame Pointer (FP), the latter also known as the Base Pointer (BP). SP always points to the "top" of the stack, and FP always points to the "top" of the current frame. The thread additionally maintains a program counter (PC), which points to the next instruction to be executed. Whenever a function call takes place, the following steps occur, in roughly this order:

  1. The caller saves local variables and temporaries, by pushing them onto the stack.
  2. The caller pushes the callee's actual parameters onto the stack.
  3. The caller branches to the callee, pushing PC onto the stack (on most architectures, this is a single instruction called CALL). When on the stack, the saved PC is called the return address.
  4. The callee pushes the value of FP onto the stack.
  5. The callee copies SP to FP.
  6. The callee adjusts SP, creating storage locations for local variables and local temporaries on the stack.

Steps 4--6 above are referred to as the function prologue, since they are the beginning of every function.

Within the body of the callee function, formal parameters and local variables can all be accessed at addresses relative to the frame pointer. Because of this, a function may recurse, each invocation automatically receiving its own storage for its local variables.
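For instance, the following C sketch relies on exactly this property: each recursive invocation of factorial gets a fresh frame, and therefore its own copy of the parameter n.

```c
#include <assert.h>

/* Each invocation of factorial gets its own stack frame, so the
   parameter n is a distinct storage location at every depth. */
static unsigned long factorial(unsigned long n)
{
    if (n <= 1)
        return 1;
    /* The recursive call pushes a new frame; this frame's n is
       untouched while the callee runs. */
    return n * factorial(n - 1);
}
```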


Upon exit from the function, those steps are performed in reverse:

  1. The callee restores SP, and in doing so destroys the storage locations reserved for locals and temporaries.
  2. The callee restores FP, and in doing so returns to the previous frame.
  3. The callee branches back to caller by popping PC off of the stack (on most architectures, this is a single instruction called RETURN).
  4. The caller removes the actual parameters from the stack.
  5. The caller restores local variables and temporaries, by popping them from the stack.

Steps 1--3 are referred to as the function epilogue, since they are at the end of every function.
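Taken together, the call and return sequences can be sketched with a small simulation in C. This is a toy model, not a real ABI: the stack is an array of ints, it grows downward, and the "return address" is just a number.

```c
#define STACK_SIZE 64

/* A toy call stack. Slots grow downward: sp is the index of the
   current top of the stack, fp marks the current frame. */
static int stack[STACK_SIZE];
static int sp = STACK_SIZE;   /* stack pointer (stack empty) */
static int fp = STACK_SIZE;   /* frame pointer */

static void push(int v) { stack[--sp] = v; }
static int  pop(void)   { return stack[sp++]; }

/* Call steps 2-6: the caller pushes an argument and a fake return
   address, then the callee builds its frame with room for nlocals
   local variables. */
static void simulate_call(int arg, int return_addr, int nlocals)
{
    push(arg);          /* step 2: actual parameter */
    push(return_addr);  /* step 3: saved PC (return address) */
    push(fp);           /* step 4: saved frame pointer */
    fp = sp;            /* step 5: FP now marks this frame */
    sp -= nlocals;      /* step 6: reserve space for locals */
}

/* Return steps 1-4: tear the frame down in reverse order and hand
   back the saved return address. */
static int simulate_return(void)
{
    sp = fp;            /* step 1: discard locals and temporaries */
    fp = pop();         /* step 2: restore the caller's frame */
    int ra = pop();     /* step 3: pop the return address */
    (void)pop();        /* step 4: caller removes the parameter */
    return ra;
}
```

After a matched call and return, SP and FP are back exactly where they started, which is what lets calls nest to any depth.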

Depending on the Application Binary Interface (ABI) of the particular architecture and platform, these steps may be performed in a slightly different order. For instance, some systems save temporaries in two groups---those saved by the caller, and those saved by the callee. Similarly, on some platforms, the program counter may be saved onto a different stack. Many other variants are possible.


Another notable variant of the stack frame is the use of canary values to thwart buffer overflow attacks.
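The idea behind a canary can be sketched on a simulated frame: a known value (0xAA here, chosen arbitrarily) is placed between a local buffer and the saved control information, and the epilogue refuses to trust the return address if that value has changed.

```c
#include <string.h>

#define CANARY 0xAAu

/* A toy frame laid out in a flat byte array:
   [0..7]  local buffer
   [8]     canary byte
   [9]     simulated return address slot */
enum { BUF = 0, CAN = 8, RET = 9, FRAME = 10 };

/* Copy input into the buffer with no bounds check (the bug), then
   run the epilogue check: report 1 if the canary is still intact. */
static int canary_survives(const char *input)
{
    unsigned char frame[FRAME];
    frame[CAN] = CANARY;
    frame[RET] = 0x42;                          /* pretend return address */
    memcpy(frame + BUF, input, strlen(input));  /* deliberately unsafe copy */
    return frame[CAN] == CANARY;
}
```

A real implementation (such as a compiler's stack protector) uses an unpredictable canary value and aborts the program when the check fails, rather than returning through a possibly corrupted address.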

Limitations

Stack frames are not the most efficient way to manage short-lived values, such as formal parameters, local variables, and intermediate computations. On architectures that support it, it is faster to store these in registers than in memory locations. Although some languages in the past (C, notably, with its register keyword) allowed the programmer to hint whether a variable should live in regular memory or in a register (subject to the availability of enough registers), in modern systems this is managed transparently by the compiler's register allocator (unbeknownst to programmers) as part of making programs execute rapidly.

Languages with closures

Stack frames assume that the lifetime of variables is delimited by the function which declares them, which is to say that after a function has returned, its local variables no longer exist. This is not the case for languages which support closures. For instance, in the C programming language, the following function

int *badFunction(void)
{
  int localVar = 4;
  return &localVar;
}

This function produces undefined results, since it returns a pointer to a memory location that will be reused by later function calls. After a few such calls, it is unlikely that the returned pointer will still point to the integer 4. It may seem that other high-level languages, such as Java or C#, allow this, but they do not; their local variables are actually references to objects, so one can return the object itself, but not a pointer to the local variable.
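In C, the usual workarounds are to place the value outside the stack frame, either in static storage or on the heap (a sketch; with the heap version, the caller must later free the result):

```c
#include <stdlib.h>

/* Safe alternative 1: static storage outlives the call, though all
   callers share the single variable. */
int *staticVersion(void)
{
    static int value = 4;
    return &value;
}

/* Safe alternative 2: heap storage, which survives until the caller
   explicitly frees it. */
int *heapVersion(void)
{
    int *p = malloc(sizeof *p);
    if (p)
        *p = 4;
    return p;
}
```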

However, some languages, such as Ruby and Standard ML, do support closures. In Ruby, the following example

def goodFcn()
  i = 4
  return Proc.new { i }
end

This method returns a closure that can access the local variable i even after goodFcn() has terminated. As a result, such languages must employ more memory management infrastructure than a stack frame alone.
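For comparison, C can only approximate such a closure by copying the captured value into heap-allocated storage that outlives the frame; the struct and function names below are invented for illustration:

```c
#include <stdlib.h>

/* A poor man's closure: a function pointer plus captured state. */
struct closure {
    int (*call)(struct closure *self);
    int captured_i;   /* the environment: a copy of the local i */
};

static int read_captured(struct closure *self)
{
    return self->captured_i;
}

/* Analogous to goodFcn(): the local i would die with the stack
   frame, so its value is copied into heap storage instead. */
struct closure *make_closure(void)
{
    struct closure *c = malloc(sizeof *c);
    if (c) {
        c->call = read_captured;
        c->captured_i = 4;   /* the "local variable" being captured */
    }
    return c;
}
```

Languages with real closures do this bookkeeping automatically, typically by allocating such environments on a garbage-collected heap rather than on the stack.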

Security and "Stack Smashing" Attacks

Main Article: buffer overflow attack

A stack smashing attack is a special case of a buffer overflow attack. Stack smashing attacks take advantage of the stack frame by assuming two things:

  1. Both program data (variables) and control flow information (saved program counter) are stored on the stack (as in the common x86 architecture), and
  2. The stack grows sequentially in contiguous memory (either from high to low addresses, or vice versa).


It is thus sometimes possible to craft a special input which tricks a poorly written program into overwriting its own control flow information, thereby taking control of the executing program. When excess information is written to sequential addresses in a stack frame, it eventually reaches control flow information on the stack and overwrites it with a new return address, which can redirect execution to malicious code already written elsewhere in the stack. In this fashion, viruses can bootstrap themselves into a running process. This vulnerability is especially important if a machine executes programs downloaded over a network.
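The mechanism can be demonstrated safely on a simulated frame: a fixed-size buffer sits just below the saved return address, and an unchecked copy of attacker-controlled input runs past the buffer into the control information (a toy model; the layout and addresses are invented):

```c
#include <string.h>

/* Toy frame layout in a flat byte array: an 8-byte local buffer
   followed immediately by a 4-byte saved return address. */
enum { BUF_LEN = 8, FRAME_LEN = 12 };

/* Copy attacker-controlled input into the buffer with no bounds
   check, then report the return address the "epilogue" would use. */
static unsigned simulated_return_address(const char *input, size_t len)
{
    unsigned char frame[FRAME_LEN];
    unsigned ret = 0x1000u;                      /* legitimate return address */
    memcpy(frame + BUF_LEN, &ret, sizeof ret);   /* saved PC on the "stack" */
    memcpy(frame, input, len);                   /* the vulnerable copy */
    memcpy(&ret, frame + BUF_LEN, sizeof ret);   /* epilogue reads it back */
    return ret;
}
```

With input no longer than the buffer, the saved return address is untouched; with twelve bytes of input, the last four bytes replace it, which is exactly what a stack smashing payload arranges.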

See Also

* process
* linking loader
* virtual memory
* thread
* compiler
* high-level language
* register allocation
* memory hierarchy
* memory management
* buffer overflow attack
* closure (computer science)