A Guide to Kernel Exploitation: Attacking the Core
Execute shellcode
    User-land exploits: The shellcode can execute kernel system calls via
    user-land gates that guarantee safety and correctness.
    Kernel-land exploits: The shellcode executes at a higher privilege level
    and has to return to user land correctly, without panicking the system.

Bypass anti-exploitation protections
    User-land exploits: This requires increasingly more complicated approaches.
    Kernel-land exploits: Most of the protections are at the kernel level but
    do not protect the kernel itself. The attacker can even disable most of
    them.
illusion of multitasking. To achieve that, the kernel saves and associates with each
running process a set of information representing its state: where it is in the
execution process, whether it is active or waiting for some resource, the state of
the machine when it was removed from the CPU, and so on. All this information
is usually referred to as the execution context, and the action of taking a process
off the CPU in favor of another one is called context switching. The subsystem
responsible for selecting the next process that will run and for arbitrating the
CPU among the various tasks is the scheduler. As you will learn, being able to
influence the scheduler's decisions is of great importance when exploiting race
conditions.
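You can already experiment with nudging the scheduler from user land. The following minimal sketch is our own Linux-specific example (not taken from the chapter): it pins the calling process to a single CPU with sched_setaffinity(), a common trick in exploit code that races against the kernel, because it makes the interesting interleaving more likely.

/* Minimal sketch (Linux-specific, illustrative only): restricting a process
 * to one CPU is a simple way to influence the scheduler's decisions when
 * trying to win a race condition. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);                 /* allow execution only on 'cpu' */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    pin_to_cpu(0);                      /* a second, racing task would
                                           typically be pinned elsewhere */
    printf("pinned to CPU 0\n");
    return 0;
}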
In addition to information for correctly performing a context switch, the kernel
keeps track of other process details, such as what files it opened, its security
credentials, and what memory ranges it is using. Being able to successfully locate
the structures that hold these details is usually the first step in kernel shellcode
development. Once you can get to the structure that holds the credentials for the
running process, you can easily raise your privileges/capabilities.
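To make that concrete, here is a heavily simplified sketch of what kernel shellcode conceptually does on a modern Linux kernel: it calls the kernel's own credential functions to install root credentials for the current task. This is our own illustration, not a drop-in payload; the two addresses are made-up placeholders that real shellcode would have to resolve for the specific target kernel, and the pointer types are simplified (the real prototypes take and return struct cred pointers).

/* Conceptual sketch only; addresses below are placeholders. */
#define PREPARE_KERNEL_CRED_ADDR 0xffffffff81080000UL  /* placeholder */
#define COMMIT_CREDS_ADDR        0xffffffff81080200UL  /* placeholder */

static void get_root(void)
{
    /* prepare_kernel_cred(NULL) builds a fresh root credential set;
     * commit_creds() installs it for the currently running task. */
    void *(*prepare_kernel_cred)(void *) =
        (void *(*)(void *))PREPARE_KERNEL_CRED_ADDR;
    int (*commit_creds)(void *) =
        (int (*)(void *))COMMIT_CREDS_ADDR;

    commit_creds(prepare_kernel_cred(NULL));
}

Once code like this runs in kernel context, the exploited process returns to user land with full root privileges.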
Virtual Memory
Another kernel subsystem any exploit developer needs to be familiar with is the
one providing the virtual memory abstraction to processes and to the kernel itself.
Computers have a fixed amount of physical memory (random access memory or
RAM) that can be used to store temporary, volatile data. The physical address
space range is the set of addresses that goes from 0 to RAM SIZE - 1. At the
same time, modern operating systems provide to each running process and to
various kernel subsystems the illusion of having a large, private address space all
for themselves. This virtual address space is usually larger than the physical
address space and is limited by the architecture: on an n-bit architecture it
generally ranges from 0 to 2^n - 1. The virtual memory subsystem is responsible for
keeping this abstraction in place, managing the translation from virtual addresses
to physical addresses (and vice versa) and enforcing the separation between dif-
ferent address spaces. As we said in the previous sections, one of the building
blocks of a secure system is the isolation between the kernel and the processes,
and between the processes themselves. To achieve that, nearly all the operating
systems (and indeed, the ones we will cover in this book) divide the physical
address range into fixed-size chunks called page frames, and the virtual address
range into equally sized chunks called pages. Anytime a process needs to use a
memory page, the virtual memory subsystem allocates a physical frame to it. The
translation from virtual pages to physical frames is done through page tables,
which record to which specific physical page frame a given virtual address maps.
Once all the page frames have been allocated and a new one is needed, the oper-
ating system picks a page that is not being used and copies it to the disk, in a
dedicated area called swap space, thereby freeing a physical frame that will be
returned to the process. If the evicted page is needed again, the operating system
will copy another page to the disk and bring the previous one back in. This
operation is called swapping. Since accessing the hard drive is a slow operation,
to improve performance the virtual memory subsystem first creates a virtual
address range for the process and then assigns a physical page frame only
when that address is referenced for the first time. This approach is known as
demand paging.
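You can observe demand paging directly with a few lines of code. The sketch below is our own Linux example: it maps a large anonymous region and prints the process's resident set size, taken from /proc/self/statm, before and after touching the pages; the physical frames show up only after the first write.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");

    if (f) {
        fscanf(f, "%ld %ld", &size, &resident);   /* second field: RSS in pages */
        fclose(f);
    }
    return resident;
}

int main(void)
{
    size_t len = 64UL * 1024 * 1024;              /* 64MB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED)
        return 1;
    printf("resident pages after mmap : %ld\n", resident_pages());
    memset(p, 0xAA, len);                         /* first touch allocates frames */
    printf("resident pages after touch: %ld\n", resident_pages());
    munmap(p, len);
    return 0;
}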
TOOLS & TRAPS…
Observing the Virtual Address Space of a Process
We just gave you a primer on what virtual memory is and how it works. To see it in
action you can use some of the tools that your operating system provides you. On Linux
machines, you can execute the command cat /proc/<pid>/maps (where <pid> is the
numeric PID of the process you are interested in) to see a list of all the memory that the
process mapped (i.e., all the virtual address ranges that the process requested). Here
is an example:
luser@katamaran:~$ cat /proc/3184/maps
00400000-004c1000 r-xp 00000000 03:01 703138 /bin/bash
006c1000-006cb000 rw-p 000c1000 03:01 703138 /bin/bash
006cb000-006d0000 rw-p 006cb000 00:00 0
00822000-008e2000 rw-p 00822000 00:00 0 [heap]
7f7ea5627000-7f7ea5632000 r-xp 00000000 03:01 809430
/lib/libnss_files-2.9.so
7f7ea5632000-7f7ea5831000 ---p 0000b000 03:01 809430
/lib/libnss_files-2.9.so
[…]
As you can see, a variety of information is provided, such as the address ranges
(indicated on the left), page protections (rwxp as read/write/execute/private), and the
backing file of the mapping, if any. You can get similar information on nearly all the
operating systems out there. On OpenSolaris you would use the pmap command, for
example pmap -x <pid>, whereas on Mac OS X you would execute the vmmap command,
for instance vmmap <pid> or vmmap <procname>, where <procname> is a string that will
be matched against all the processes running on the system. If you are working on Windows,
we suggest that you download the Sysinternals Suite by Mark Russinovich (http://technet.
microsoft.com/en-us/sysinternals/bb842062.aspx), which provides a lot of very useful system
and process analysis tools in addition to vmmap.
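If you prefer to do the same thing programmatically, a process can also read its own map. The tiny Linux-only sketch below is our own example, not part of the book: it simply prints the same information that cat shows above.

#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/self/maps", "r");      /* same data as cat /proc/<pid>/maps */

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}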
Depending on the architecture, there might be more or less hardware support
to implement this process. Leaving the gory details aside for a moment (details
that you can find precisely described in any architecture or operating system
book), the inner core of the CPU needs to address physical memory, while we (as
exploit writers) will nearly always play with virtual memory.
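On Linux you can even peek at the virtual-to-physical mapping from user land: /proc/<pid>/pagemap exposes, for each virtual page, whether it is resident and which physical frame backs it (reading the frame number generally requires root on recent kernels). The following sketch is our own illustrative example, not material from the chapter.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    int *target = malloc(sizeof(*target));
    uint64_t entry = 0;
    int fd;

    *target = 42;                                  /* touch it so a frame is assigned */
    fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return 1;

    /* One 64-bit entry per virtual page, indexed by virtual page number. */
    off_t offset = (off_t)((uintptr_t)target / page_size) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry))
        return 1;

    if (entry & (1ULL << 63))                      /* bit 63: page present in RAM */
        printf("virtual %p -> physical frame 0x%llx\n", (void *)target,
               (unsigned long long)(entry & ((1ULL << 55) - 1)));  /* bits 0-54: PFN */

    close(fd);
    free(target);
    return 0;
}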
We just said the virtual-to-physical translation is performed by consulting a
particular data structure known as the page table. A different page table is
created for each process, and at each context switch the correct one is loaded.
Since each process has a different page table and thus a different set of pages,
it sees a large, contiguous, virtual address space all for itself, and isolation
among processes is enforced. Specific page attributes allow the kernel to pro-
tect its pages from user land, “hiding” its presence. Depending on how this is
implemented, you have two possible scenarios: kernel space on behalf of user
space or separated kernel and user address space. We will discuss why this is a
very interesting characteristic from an exploitation point of view in the
next section.
User Space on Top of Kernel Space versus Separated Address Spaces
Due to the user/supervisor page attribute, sitting in user land you see hardly any
of the kernel layout; nor do you know about the addresses at which the kernel
address space is mapped. On the other hand, though, it is from user land that
your attack takes off. We just mentioned that two main designs can be
encountered:
• Kernel space on behalf of user space  In this scenario, the virtual address
space is divided into two parts: one private to the kernel and the other
available to the user-land applications. This is achieved by replicating the
kernel page table entries over every process\u2019s page tables. For example, on a
32-bit x86 machine running