A Guide to Kernel Exploitation: Attacking the Core
downward, from higher addresses to lower addresses, or
vice versa), what register keeps track of its top address (generally referred to as the
stack pointer), and how procedures interact with it (how local variables are saved,
how parameters are passed, how nested calls are linked together, etc.).
Although the kernel- and user-land stacks are the same in terms of how they
function, there are some slight differences between the two that you should be
aware of. For instance, the kernel stack is usually limited in size (4KB or 8KB is
a common choice on x86 architectures), hence the paradigm of using as few local
variables as possible when doing kernel programming. Also, all processes' kernel
stacks are part of the same virtual address space (the kernel address space), and
so each of them starts at, and spans, a different range of virtual addresses.
NOTE
Some operating systems, such as Linux, use so-called interrupt stacks. These are per-CPU
stacks that get used each time the kernel has to handle some kind of interrupt (in the Linux
kernel case, external hardware-generated interrupts). This particular stack is used to avoid
putting too much pressure on the kernel stack size in case small (4KB for Linux) kernel
stacks are used.
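To make the size constraint concrete, here is a minimal user-space C sketch (not real
kernel code: the function names and the 4KB/8KB figures are illustrative assumptions,
and malloc()/free() merely stand in for a kernel allocator) contrasting a large on-stack
buffer with the preferred small-locals style:

#include <stdlib.h>
#include <string.h>

/* Risky pattern: 4KB of local data on a stack that may only be 8KB deep
 * and is shared with the rest of the call chain. */
void handle_request_bad(const char *data, size_t len)
{
    char scratch[4096];               /* eats half of an 8KB kernel stack */

    if (len > sizeof(scratch))
        len = sizeof(scratch);
    memcpy(scratch, data, len);
    /* ... process scratch ... */
}

/* Preferred pattern: keep locals tiny and take large buffers from the
 * allocator (a kernel would use its object allocator; malloc() stands in). */
void handle_request_better(const char *data, size_t len)
{
    char *scratch = malloc(4096);

    if (scratch == NULL)
        return;
    if (len > 4096)
        len = 4096;
    memcpy(scratch, data, len);
    /* ... process scratch ... */
    free(scratch);
}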
As you can see from this introduction, kernel stack vulnerabilities are not much
different from their user-land counterparts and are usually the consequence of writing
past the boundaries of a stack allocated buffer. This situation can occur as a result of:
• Using one of the unsafe C functions, such as strcpy() or sprintf(). These
functions keep writing to their destination buffer, regardless of its size, until
a \0 terminating character is found in the source string (a minimal sketch of
this pattern appears right after this list).
• An incorrect termination condition in a loop that populates an array. For
example:
#define ARRAY_SIZE 10

void func() {
    int array[ARRAY_SIZE];
    int j;

    for (j = 0; j <= ARRAY_SIZE; j++) {
        array[j] = some_value;
        [...]
    }
}
Since valid array indexes go from 0 to ARRAY_SIZE - 1, when we copy some_value
into array[j] with j == 10 we are actually writing past the buffer limits
and potentially overwriting sensitive memory (e.g., a pointer variable saved
right after our array).
• Using one of the safe C functions, such as strncpy(), memcpy(), or snprintf(),
and incorrectly calculating the size of the destination buffer. This is usually
the consequence of particular bug classes that affect integer operations,
generally referred to as integer overflows, which we will describe in more
detail in the "Integer Issues" section later in this chapter.
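To ground the first and third cases, the following user-space C sketch shows both
patterns on stack buffers. Everything here is hypothetical: struct request,
parse_name(), and copy_payload() are made-up names standing in for
attacker-controlled data reaching kernel code; the point is only the shape of the bug.

#include <string.h>
#include <stdint.h>
#include <stddef.h>

struct request {                     /* attacker-controlled input */
    uint32_t payload_len;
    char     name[256];              /* expected to be NUL-terminated */
    char     payload[256];
};

/* Unsafe function: strcpy() copies until the source's '\0', so a name
 * longer than 63 characters writes past buf and into whatever the
 * compiler placed after it on the stack. */
void parse_name(const struct request *req)
{
    char buf[64];

    strcpy(buf, req->name);          /* no bound on the copy */
    /* ... use buf ... */
}

/* "Safe" function, miscalculated size: the length check uses the raw
 * attacker-supplied value, but the copy subtracts a 4-byte header first.
 * For payload_len < 4 the unsigned subtraction wraps around to a huge
 * size_t and memcpy() floods the stack anyway. */
void copy_payload(const struct request *req)
{
    char buf[128];
    size_t n;

    if (req->payload_len > sizeof(buf))
        return;
    n = (size_t)req->payload_len - 4;   /* wraps when payload_len < 4 */
    memcpy(buf, req->payload, n);
}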
Since the stack plays a critical role in the application binary interface of a
specific architecture, exploiting kernel stack vulnerabilities can be heavily
architecture-dependent, as you will see in Chapter 3.
Kernel Heap Vulnerabilities
In Chapter 1, we saw that the kernel implements a virtual memory abstraction,
creating the illusion of a large and independent virtual address space for all the
user-land processes (and, indeed, for itself). The basic unit of memory that the
kernel manages is the physical page frame, which can vary in size but is never
smaller than 4KB. At the same time, the kernel needs to continuously allocate
space for a large variety of small objects and temporary buffers. Using the physi-
cal page allocator for such a task would be extremely inefficient, and would lead
to a lot of fragmentation and wasted space. Moreover, such objects are likely to
have a short lifetime, which would put an extra burden on the physical page
allocator (and on demand paging to disk), noticeably hurting overall system
performance.
The general approach that most modern operating systems take to solve this
problem is to have a separate kernel-level memory allocator that communicates with
the physical page allocator and is optimized for fast and continuous allocation and
relinquishing of small objects. Different operating systems have different variations
of this type of allocator, and we will discuss the various implementations in Part II of
this book. For now, it's important to understand the general ideas behind this kind of
object allocator so that you know what kinds of vulnerabilities might affect it.
We said that this allocator is a consumer of the physical page allocator; it asks
for pages, and eventually it returns them. Each page is then divided into a number
of fixed-size chunks (commonly called slabs, from the Slab Allocator designed by
Jeff Bonwick for SunOS 5.4), and pages containing objects of the same size are
grouped together. This group of pages is usually referred to as a cache.
Although objects can be of virtually any size, power-of-two sizes are generally
used, for efficiency reasons. When some kernel subsystem asks for an object, the
allocator returns a pointer to one of those chunks. The allocator also needs to keep
track of which objects are free (to be able to satisfy the subsequent allocation/free
correctly). It can keep this information as metadata inside the page, or it can keep
the data in some external data structure (e.g., a linked list). Again, for performance
reasons the object memory is usually not cleared at free or allocation time, but
specific functions that do clear the object memory at these times are provided.
Recalling our discussion about dead memory, it's also possible to talk about a dead heap.
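As a concrete (and deliberately simplified) illustration of these mechanics, here is a
toy user-space sketch of such an object cache. It is not any particular kernel's
implementation: one malloc()'d buffer stands in for a page frame, free chunks are
threaded through their own first bytes (in-page metadata), and, as described above,
neither allocation nor free clears the object memory. It assumes obj_size is at least
sizeof(void *).

#include <stdlib.h>
#include <stddef.h>

#define PAGE_SIZE 4096

struct toy_cache {
    void  *page;        /* backing "page frame" (malloc() stands in)      */
    size_t obj_size;    /* size of every chunk in this cache              */
    void  *freelist;    /* first free chunk; each free chunk stores a
                           pointer to the next one in its first bytes     */
};

int toy_cache_init(struct toy_cache *c, size_t obj_size)
{
    size_t i, nchunks;

    c->page = malloc(PAGE_SIZE);
    if (c->page == NULL)
        return -1;
    c->obj_size = obj_size;
    c->freelist = NULL;

    /* Carve the page into equal-sized chunks and thread them onto the
     * freelist. */
    nchunks = PAGE_SIZE / obj_size;
    for (i = 0; i < nchunks; i++) {
        void *chunk = (char *)c->page + i * obj_size;
        *(void **)chunk = c->freelist;
        c->freelist = chunk;
    }
    return 0;
}

void *toy_cache_alloc(struct toy_cache *c)
{
    void *obj = c->freelist;

    if (obj != NULL)
        c->freelist = *(void **)obj;   /* pop; memory is NOT cleared      */
    return obj;
}

void toy_cache_free(struct toy_cache *c, void *obj)
{
    *(void **)obj = c->freelist;       /* push back; still not cleared    */
    c->freelist = obj;
}

A real allocator would, of course, also keep per-cache bookkeeping (how many pages it
owns, which ones are full) and hand empty pages back to the physical page allocator.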
Size can be the only discriminator in the creation of different caches; however,
object-specific caches can be created too. In the latter case, frequently used
objects receive a specific cache, and size-based general-purpose caches are avail-
able for all other allocations (e.g., temporary buffers). An example of a frequently
used object is the structure for holding information about each directory entry on
the file system or each socket connection created. Searching for a file on the file
system will quickly consume a lot of directory entry objects, and a big Web site
will likely have thousands of open connections.
Whenever such objects receive a specific cache, the size of the chunks will
likely reflect the specific object size; as a result, non-power-of-two sizes will be
used to optimize space. In this case, as well as in the case of in-cache metadata
information, the free space available for chunks might not be divisible by the
chunk size. This "empty" space is used, in some implementations, to color the
cache, making the objects in different pages start at different offsets and, thus,
end on different hardware cache lines (again improving overall performance).
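The arithmetic behind coloring can be sketched in a few lines; the page size, cache
line size, object size, and metadata size below are illustrative assumptions, not
values taken from any specific allocator.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE  4096
#define CACHE_LINE 64

int main(void)
{
    size_t obj_size = 300;                  /* non-power-of-two object     */
    size_t metadata = 32;                   /* hypothetical in-page header */
    size_t nchunks  = (PAGE_SIZE - metadata) / obj_size;         /* 13     */
    size_t leftover = PAGE_SIZE - metadata - nchunks * obj_size; /* 164 B  */
    size_t ncolors  = leftover / CACHE_LINE + 1;                 /* 3      */
    size_t page;

    /* Each new page (slab) of this cache gets the next color in the
     * cycle, so objects with the same index start at different offsets
     * and end up on different hardware cache lines. */
    for (page = 0; page < 5; page++) {
        size_t color_off = (page % ncolors) * CACHE_LINE;
        printf("page %zu: first object at offset %zu\n",
               page, metadata + color_off);
    }
    return 0;
}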
The vulnerabilities that can affect the kernel heap are usually a consequence of
buffer overflows, with the same triggering modalities we described earlier in the
"Kernel Stack Vulnerabilities" section (use of unsafe functions, incorrectly
terminated loops, incorrect use of safe functions, etc.). The likely outcome of such an
overflow is to overwrite either the contents of the chunk following the overflowed
chunk, or some cache-related metadata (if present), or some random kernel memory
(if the overflow is big enough to span past the boundary of the page the
chunks reside in, or if the chunk is at the end of the cache page).
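The adjacency problem is easy to visualize even without a real allocator. In the
hypothetical sketch below, two 64-byte chunks sit back to back the way neighboring
objects do inside a cache page, and an overflow of the first silently rewrites a
function pointer stored in the second; struct neighbor and the 64-byte chunk size are
made up for the example.

#include <stdio.h>
#include <string.h>

struct neighbor {                       /* victim object in the next chunk */
    void (*callback)(void);
    char  data[56];
};

struct two_chunks {                     /* two adjacent 64-byte chunks, as
                                           they would sit in a cache page  */
    char            chunk_a[64];
    struct neighbor chunk_b;
};

static void legit_callback(void)
{
    puts("expected behavior");
}

int main(void)
{
    static struct two_chunks page;
    char overflow[80];

    page.chunk_b.callback = legit_callback;

    /* 80 attacker-controlled bytes copied into a 64-byte chunk: the last
     * 16 bytes land in chunk_b and overwrite its function pointer. */
    memset(overflow, 'A', sizeof(overflow));
    memcpy(page.chunk_a, overflow, sizeof(overflow));

    /* In a kernel, invoking the clobbered pointer would redirect
     * execution to an attacker-chosen address; here we only print it. */
    printf("callback is now %p\n", (void *)page.chunk_b.callback);
    return 0;
}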
TIP
Nearly all the object allocators present in the operating systems we will evaluate provide
a way to detect this kind of overflow, via a technique that is usually referred to as
redzoning, which consists of placing an arbitrary value at the end of each chunk and
checking if that value was overwritten at the time the object was freed. Similar techniques
are also implemented to detect access to uninitialized or freed memory. All of these
debugging options have an impact on operating system performance and are thus turned
off by default. They can usually be enabled either at runtime (by