Exadata-Training-Full-Satya

Author: 
 
Satya THIPPANA 
 
 
 
 
 
ENGINEERED SYSTEMS OVERVIEW 
WHAT ARE ENGINEERED SYSTEMS: 
With an inclusive "in-a-box" strategy, Oracle's engineered systems combine best-of-breed hardware and 
software components with game-changing technical innovations. Designed, engineered, and tested to 
work together, Oracle's engineered systems cut IT complexity, reduce costly maintenance, integration 
and testing tasks—and power real-time decision making with extreme performance. 
WHY ENGINEERED SYSTEMS: 
In the era of big data and cloud services, data increasingly drives business innovation. For many 
companies, however, IT complexity is a serious roadblock, impeding decision makers’ ability to get the 
information they need, when they need it. IT complexity not only slows corporate agility, it also stifles 
innovation, as companies are forced to invest in ‘keeping the lights on’ instead of transformative digital 
projects. 
The components of Oracle's engineered systems are preassembled for targeted functionality and then 
optimized—as a complete system—for speed and simplicity. That means existing workloads run faster 
without adding complexity to the IT environment. 
 
 Oracle Exadata Database Machine 
 Oracle Exalogic Elastic Cloud 
 Oracle Exalytics In-Memory Machine 
 Oracle SuperCluster 
 Oracle Private Cloud Appliance 
 Oracle Database Appliance 
 Oracle Big Data Appliance 
 Zero Data Loss Recovery Appliance 
 Oracle FS1 Flash Storage System 
 Oracle ZFS Storage Appliance 
 
THE EXA FAMILY 
 Oracle Exadata Database Machine 
 Oracle Exalogic Elastic Cloud 
 Oracle Exalytics In-Memory Machine 
 
 
 
 
 
EXADATA DATABASE MACHINE INTRODUCTION & OVERVIEW 
EXADATA OVERVIEW 
 Fully integrated platform for Oracle Database 
 Based on Exadata Storage Server storage technology 
 High-performance and high-availability for all Oracle Database workloads 
 Well suited as a database consolidation platform 
 Simple and fast to implement (Pre-configured, Tested and Tuned for Performance) 
 
WHY EXADATA? 
Exadata Database Machine is designed to address common issues: 
 Data Warehousing issues: 
 Supports large, complex queries 
 Managing Multi-Terabyte databases 
 OLTP issues: 
 Supporting large user populations and transaction volumes 
 Delivering quick and consistent response times 
 Consolidation issues: 
 Efficiently supporting mixed workloads (OLTP & OLAP) 
 Prioritizing workloads (IORM) 
 Configuration Issues: 
 Creating a balanced configuration without bottlenecks 
 Building and maintaining a robust system that works 
 Redundant and Fault Tolerant: 
 Failure of any component is tolerated. 
 Data is mirrored across storage servers. 
 
COMPONENTS OF EXADATA DATABASE MACHINE: 
COMPONENT              FULL RACK   HALF RACK   QUARTER RACK   EIGHTH RACK 
Database Servers           8           4            2              2 
Storage Servers           14           7            3              3 
InfiniBand Switches        2           2            2              2 
PDUs                       2           2            2              2 
Cisco Admin Switch         1           1            1              1 
 
 
 
 
 
 
 
 
 
 
 
 
 
Database Machine is a fully-integrated Oracle Database platform based on Exadata Storage Server 
storage technology. Database Machine provides a high-performance, highly-available solution for all 
database workloads, ranging from scan-intensive data warehouse applications to highly concurrent 
OLTP applications. 
Using the unique clustering and workload management capabilities of Oracle Database, Database 
Machine is well-suited for consolidating multiple databases onto a single Database Machine. Delivered 
as a complete package of software, servers, and storage, Database Machine is simple and fast to 
implement. 
Database Machine hardware and software components are a series of separately purchasable items. 
Customers can choose from the different hardware configurations that are available. Appropriate 
licensing of Oracle Database and Exadata cell software is also required. In addition, Database Machine is 
highly complementary with clustering and parallel operations, so Oracle Real Application Clusters and 
Oracle Partitioning are highly recommended software options for Database Machine. 
X4-2 DATABASE SERVER OVERVIEW: 
 
 
 
 
X4-2 STORAGE SERVER OVERVIEW: 
 
 
 
 
 
 
EXADATA EVOLUTION:
 
EXADATA FEATURES BY VERSIONS: 
 
EXADATA SCALABILITY: 
 
 
 
 
 
 Scale to eight racks by adding cables 
 Scale from 9 to 36 racks by adding 2 InfiniBand switches 
 Scale to hundreds of storage servers to support multi petabyte databases 
 Exadata Storage Expansion Racks are available in Quarter, Half, and Full Rack configurations. 
 
EXADATA X4-2 CONFIGURATION WORK SHEET 
 
 
 
 
 
EXADATA IO PERFORMANCE WORK SHEET: 
 
EXADATA DATABASE MACHINE SOFTWARE ARCHITECTURE:
 
 
 
 
DATABASE SERVER COMPONENTS: 
 
 Operating System (Oracle Linux x86_64 or Solaris 11 Express for x86) 
 Run Oracle Database 11g Release 2. 
 Automatic Storage Management (ASM) 
 
 LIBCELL ($ORACLE_HOME/lib/libcell11.so) is a special library used by Oracle Database to communicate 
with Exadata cells. In combination with the database kernel and ASM, LIBCELL transparently maps 
database I/O operations to Exadata Storage Server enhanced operations. LIBCELL communicates 
with Exadata cells using the Intelligent Database protocol (iDB). iDB is a unique Oracle data transfer 
protocol, built on Reliable Datagram Sockets (RDS), that runs on industry standard InfiniBand 
networking hardware. 
LIBCELL and iDB enable ASM and database instances to utilize Exadata Storage Server features, such 
as Smart Scan and I/O Resource Management. 
 
 Database Resource Manager (DBRM) is integrated with Exadata Storage Server I/O Resource 
Management (IORM). DBRM and IORM work together to ensure that I/O resources are allocated 
based on administrator-defined priorities. 
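
As an illustration of the LIBCELL/iDB path described above, the V$CELL view on a database instance lists the 
cells the instance can reach over the storage network. The following is a minimal sketch; the host name is the 
one used elsewhere in this document and the view columns are standard, but treat the example as illustrative: 

[oracle@exa01dbadm01 ~]$ sqlplus -s / as sysdba <<'EOF' 
-- one row per Exadata cell visible to this instance over iDB 
SELECT cell_path, cell_hashval FROM v$cell; 
EOF 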
 
STORAGE SERVER COMPONENTS: 
 
 Cell Server (CELLSRV) is the primary Exadata Storage Server software component and provides the 
majority of Exadata storage services. CELLSRV is a multithreaded server. CELLSRV serves simple 
block requests, such as database buffer cache reads, and Smart Scan requests, such as table scans 
with projections and filters. CELLSRV also implements IORM, which works in conjunction with DBRM 
to meter out I/O bandwidth to the various databases and consumer groups issuing I/Os. Finally, 
CELLSRV collects numerous statistics relating to its operations. Oracle Database and ASM processes 
use LIBCELL to communicate with CELLSRV, and LIBCELL converts I/O requests into messages that 
are sent to CELLSRV using the iDB protocol. 
 
 Management Server (MS) provides Exadata cell management and configuration functions. It works 
in cooperation with the Exadata cell command-line interface (CellCLI). Each cell is individually 
managed with CellCLI. CellCLI can only be used from within a cell to manage that cell; however, you 
can run the same CellCLI command remotely on multiple cells with the dcli utility. MS is also 
responsible for sending alerts, and it collects some statistics beyond those collected by CELLSRV. 
 
 Restart Server (RS) is used to start up/shut down the CELLSRV and MS services and monitors these 
services to automatically restart them if required. 
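
The status of the three services described above can be checked on each cell with CellCLI. A minimal sketch 
(host name assumed, output abbreviated): 

[root@exa01celadm01 ~]# cellcli -e "list cell attributes cellsrvStatus, msStatus, rsStatus detail" 
         cellsrvStatus:          running 
         msStatus:               running 
         rsStatus:               running 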
 
 
 
 
 
 
 
 
 
DISKMON checks the storage network interface state and cell aliveness; it also performs DBRM plan 
propagation to Exadata cells. 
Diskmon uses a node wide master process (diskmon.bin) and one slave process (DSKM) for each RDBMS 
or ASM instance. The master performs monitoring and propagates state information to the slaves. 
Slaves use the SGA to communicate with RDBMS or ASM processes. If there is a failure in the cluster, 
Diskmon performs I/O fencing to protect data integrity. Cluster Synchronization Services (CSS) still 
decides what to fence. 
Master Diskmon starts with the Oracle Clusterware processes. The slave Diskmon processes are background 
processes which are started and stopped in conjunction with the associated RDBMS or ASM instance. 
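
On a database server, the Diskmon processes described above can be observed with ps. A sketch, assuming the 
database host name used elsewhere in this document; expect one diskmon.bin master per node plus one DSKM 
slave (named, for example, asm_dskm_+ASM1 or ora_dskm_<SID>) per ASM or database instance: 

[root@exa01dbadm01 ~]# ps -ef | egrep 'diskmon|dskm' | grep -v grep 
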
EXADATA KEY FEATURES: 
 
 
 
 
1. SMART SCAN: 
A key advantage of Exadata Storage Server is the ability to offload some database processing to the 
storage servers. With Exadata Storage Server, the database can offload single table scan predicate filters 
and projections, join processing based on bloom filters, along with CPU-intensive decompression and 
decryption operations. This ability is known as Smart Scan. 
In addition to Smart Scan, Exadata Storage Server has other smart storage capabilities including the 
ability to offload incremental backup optimizations, file creation operations, and more. This approach 
yields substantial CPU, memory, and I/O bandwidth savings in the database server which can result in 
massive performance improvements compared with conventional storage. 
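
Whether Smart Scan is actually offloading work can be gauged from the cell-related statistics maintained by the 
database. A minimal sketch using standard V$SYSSTAT statistic names; the values returned depend entirely on 
the workload: 

[oracle@exa01dbadm01 ~]$ sqlplus -s / as sysdba <<'EOF' 
-- bytes that qualified for offload versus bytes actually returned by Smart Scan 
SELECT name, value FROM v$sysstat 
WHERE  name IN ('cell physical IO bytes eligible for predicate offload', 
                'cell physical IO interconnect bytes returned by smart scan'); 
EOF 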
 
2. HYBRID COLUMNAR COMPRESSION: 
The Exadata Storage Server provides a very advanced compression capability called Hybrid Columnar 
Compression (HCC) that provides dramatic reductions in storage for large databases. Hybrid Columnar 
Compression enables the highest levels of data compression and provides tremendous cost-savings and 
performance improvements due to reduced I/O, especially for analytic workloads. 
Storage savings is data dependent and often ranges from 5x to 20x. On conventional systems, enabling 
high data compression has the drawback of reducing performance. Because the Exadata Database 
Machine is able to offload decompression overhead into large numbers of processors in Exadata 
storage, most workloads run faster using Hybrid Columnar Compression than they do without it. 
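
Hybrid Columnar Compression is enabled per table (or partition) with the COMPRESS FOR clause and takes 
effect for direct-path loads. A minimal sketch; the SALES and SALES_HIST table names are placeholders: 

[oracle@exa01dbadm01 ~]$ sqlplus -s / as sysdba <<'EOF' 
-- QUERY LOW/HIGH favour query performance; ARCHIVE LOW/HIGH favour maximum compression 
CREATE TABLE sales_hist COMPRESS FOR QUERY HIGH AS SELECT * FROM sales; 
EOF 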
 
 
 
 
 
3. STORAGE INDEXES: 
A storage index is a memory-based structure that holds information about the data inside specified 
regions of physical storage. The storage index keeps track of minimum and maximum column values and 
this information is used to avoid useless I/O. 
Storage Indexes are created automatically and transparently based on the SQL predicate information 
executed by Oracle and passed down to the storage servers from the database servers. 
Storage Indexes are very lightweight and can be created and maintained with no DBA intervention. 
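
The I/O avoided by storage indexes is visible in a single database statistic. A sketch, using a standard 
V$SYSSTAT statistic name: 

[oracle@exa01dbadm01 ~]$ sqlplus -s / as sysdba <<'EOF' 
-- bytes of physical I/O avoided thanks to storage index min/max pruning 
SELECT name, value FROM v$sysstat 
WHERE  name = 'cell physical IO bytes saved by storage index'; 
EOF 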
 
 
 
 
4. IORM (I/O Resource Management): 
 
Exadata Storage Server I/O Resource Management (IORM) allows workloads and databases to share I/O 
resources automatically according to user-defined policies. To manage workloads within a database, you 
can define intra-database resource plans using the Database Resource Manager (DBRM), which has 
been enhanced to work in conjunction with Exadata Storage Server. To manage workloads across 
multiple databases, you can define IORM plans. 
For example, if a production database and a test database are sharing an Exadata cell, you can configure 
resource plans that give priority to the production database. In this case, whenever the test database 
load would affect the production database performance, IORM will schedule the I/O requests such that 
the production database I/O performance is not impacted. This means that the test database I/O 
requests are queued until they can be issued without disturbing the production database I/O 
performance. 
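
Following the production/test example above, an inter-database IORM plan is defined on each cell with CellCLI. 
This is a sketch only; the database names and allocation percentages are illustrative, not a recommendation: 

CellCLI> ALTER IORMPLAN dbplan=((name=PROD, level=1, allocation=80), - 
                                (name=TEST, level=1, allocation=20), - 
                                (name=other, level=2, allocation=100)) 
CellCLI> LIST IORMPLAN DETAIL 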
 
 
 
 
 
 
 
 
5. SMART FLASH CACHE: 
 
 
Exadata Storage Server makes intelligent use of high-performance flash memory to boost performance. 
The Exadata Smart Flash Cache automatically caches frequently accessed data in PCI flash while keeping 
infrequently accessed data on disk drives. This provides the performance of flash with the capacity and 
low cost of disk. The Exadata Smart Flash Cache understands database workloads and knows when to 
avoid caching data that the database will rarely access or is too big to fit in the cache. 
Each Storage Server (X4-2) has 4 PCI FLASH CARDS and each FLASH CARD has 4 FLASH DISKS, with a total 
of 3 TB of flash memory per cell. Each FLASH DISK is 186 GB. 
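
The flash cache configuration of a cell can be inspected from CellCLI. A sketch; the FC_BY_USED metric name is 
an assumption to verify against your cell software version: 

CellCLI> LIST FLASHCACHE DETAIL 
CellCLI> LIST METRICCURRENT WHERE name = 'FC_BY_USED' 
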
6. INFINIBAND NETWORK: 
 
The InfiniBand network provides a reliable high-speed storage network and cluster interconnect for 
Database Machine. It can also be used to provide high performance external connectivity to backup 
servers, ETL servers and middleware servers such as Oracle Exalogic Elastic Cloud. Each database server 
and Exadata Storage Server is connected to the InfiniBand network using a bonded network interface 
(BONDIB0). 
iDB is built on Reliable Datagram Sockets (RDS v3) protocol and runs over InfiniBand ZDP (Zero-loss 
Zero-copy Datagram Protocol). The objective of ZDP is to eliminate unnecessary copying of blocks. RDS 
is based on the Socket API and provides low overhead, low latency, and high bandwidth. The Exadata cell 
node can send and receive large transfers using Remote Direct Memory Access (RDMA). 
http://www.openfabrics.org/
 
 
 
RDMA is direct memory access from the memory of one server into the memory of another without involving 
either server’s operating system. The transfer requires no work to be done by CPUs, caches, or context 
switches, and transfers continue in parallel with other system operations. It is quite useful in massively 
parallel processing environments. 
RDS is heavily used on Oracle Exadata. RDS delivers highly available, low-overhead datagrams, similar to 
UDP but more reliable and with zero copy. It accesses InfiniBand via the Socket API. RDS v3 
supports both RDMA read and write and allows large data transfers of up to 8 MB. It also supports 
control messages for asynchronous operations, such as submit and completion notifications. 
 
If you want to optimize communications between Oracle engineered systems, such as Exadata, Exalogic, Big 
Data Appliance, and Exalytics, you can use the Sockets Direct Protocol (SDP). SDP only 
deals with stream sockets. 
SDP allows high-performance zero-copy data transfers via RDMA network fabrics and uses a standard 
wire protocol over an RDMA fabric to support stream sockets (SOCK_STREAM). The goal of SDP is to 
provide an RDMA-accelerated alternative to the TCP protocol over IP that remains transparent to the 
application. 
It bypasses the OS resident TCP stack for stream connections between any endpoints on the RDMA 
fabric. All other socket types (such as datagram, raw, packet, etc.) are supported by the IP stack and 
operate over standard IP interfaces (i.e., IPoIB on InfiniBand fabrics). The IP stack has no dependency on 
the SDP stack; however, the SDP stack depends on IP drivers for local IP assignments and for IP address 
resolution for endpoint identifications. 
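
On the database servers, the points above can be checked with standard InfiniBand tools and with the skgxpinfo 
utility, which reports the transport linked into the Oracle binary (available in recent 11.2 releases). A sketch with 
an assumed host name; both InfiniBand ports should report an Active state at a 40 Gb/s rate: 

[root@exa01dbadm01 ~]# ibstat | egrep 'State|Rate' 
[oracle@exa01dbadm01 ~]$ $ORACLE_HOME/bin/skgxpinfo 
rds 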
 
7. DBFS (Database File System): 
Oracle Database File System (DBFS) enables an Oracle database to be used as a POSIX (Portable 
Operating System Interface) compatible file system. DBFS is an Oracle Database capability that provides 
Database Machine users with a high performance staging area for bulk data loading and ETL processing. 
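
A DBFS store is typically mounted on the database servers with the dbfs_client utility. The sketch below is 
illustrative only: dbfs_user, the orcl service name, the passwd.txt file, and the /dbfs mount point are all 
placeholders: 

[oracle@exa01dbadm01 ~]$ nohup dbfs_client dbfs_user@orcl -o allow_other,direct_io /dbfs < passwd.txt & 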
 
 
 
 
FEATURES CONSOLIDATION: 
 
EXADATA LAYERS: 
 
 
 
 
 
REDUNDANCY & FAULT TOLERANCE in EXADATA 
FAULT TOLERANCE 
 
POWER SUPPLY FAILURE: 
Redundant PDUs (Can survive with 1 out of 2) 50% 
 
NETWORK FAILURE: 
Redundant IB Switches (Can survive with 1 out of 2) 50% 
& Network BONDING (Can survive with Half the Interfaces DOWN) 50% 
 
STORAGE SERVER FAILURE: 
ASM Redundancy (Can survive with 2 out of 3 STORAGE SERVERS) 33%$ 
 
DISKS FAILURE: 
ASM Redundancy (Can survive with 9 out of 18 DISKS) 50%$ 
 
DB INSTANCE FAILURE: 
RAC (Can survive with One NODE) 50%# 
 
$ Based on ASM NORMAL REDUNDANCY 
# Based on Quarter Rack with 2 DB Nodes. 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
DATABASE MACHINE IMPLEMENTATION OVERVIEW 
 
Following are the phases in Exadata Database Machine implementation: 
 
1. Pre-installation 
 Various planning and scheduling activities including: 
 Site planning: space, power, cooling, logistics… 
 Configuration planning: host names, IP addresses, databases… 
 Network preparation: DNS, NTP, cabling… 
 Oracle and customer engineers work together 
 
2. Installation and configuration 
 Hardware and software installation and configuration 
 Result is a working system based on a recommended configuration 
 Recommended to be performed by Oracle engineers 
 
3. Additional configuration 
 Reconfigure storage using non-default settings (If needed) 
 Create additional databases 
 Configure additional Networks (backup / data guard) 
 Configure Enterprise Manager 
 Configure backup and recovery 
 Data Migrations 
 Configure additional Listeners (Data Guard / IB listener) 
 Configuring Oracle Data Guard 
 Configure DBFS 
 Connect Oracle Exalogic Elastic Cloud 
 
4. Post-installation 
 Ongoing monitoring and maintenance 
 
CONFIGURATION ACTIVITIES NOT SUPPORTED WITH DATABASE MACHINE: 
 
 HARDWARE RE-RACKING: Customers sometimes wish to re-rack a Database Machine to comply 
with a data center policy, to achieve earthquake protection or to overcome a physical limitation 
of some sort. Apart from being inherently error-prone, re-racking can cause component 
damage, thermal management issues, cable management issues and other issues. As a result, 
hardware re-racking of Database Machine is not supported. 
 
 ADDING COMPONENTS TO SERVERS: Customers sometimes wish to add components to 
Database Machine. A typical example is the desire to add a Host Bus Adapter (HBA) to the 
database servers so that they can be attached to existing SAN storage. Adding components to 
servers in Database Machine is not supported because of the potential for driver and firmware 
incompatibilities that could undermine the system. 
 
 
 
 ADDING SERVERS TO QUARTER RACK OR HALF RACK CONFIGURATIONS: Outside of a 
supported upgrade, physically adding servers to a Quarter Rack or Half Rack is not supported, 
because it changes the environmental and power characteristics of the system. It also impacts 
the future ability to conduct a supported upgrade. Note that customers can add Exadata cells 
to any Database Machine configuration by placing the additional cells in a separate rack and by 
using the spare InfiniBand ports to connect to them. 
 
 SWAPPING LINUX DISTRIBUTIONS: Oracle Linux is provided as the operating system 
underpinning the database servers and Exadata servers in Database Machine. Swapping Linux 
distributions is not supported. 
 
 CONFIGURING ACFS: ASM Cluster File System (ACFS) is currently unavailable on Database 
Machine. 
 
DISK GROUP SIZING: 
 
The backup method information is used to size the ASM disk groups created during installation. 
Specifically it is used to determine the default division of disk space between the DATA disk group and 
the RECO disk group. The backup methods are as follows: 
 
Backups internal to Oracle Exadata Database Machine (40-60): 
This setting indicates that database backups will be created only in the Fast Recovery Area (FRA) located 
in the RECO disk group. This setting allocates 40% of available space to the DATA disk group and 60% of 
available space to the RECO disk group. 
 
Backups external to Oracle Exadata Database Machine (80-20): 
If you are performing backups to disk storage external to Oracle Exadata Database Machine, such as to 
additional dedicated Exadata Storage Servers, ZFS, an NFS server, virtual tape library or tape library, 
then select this option. This setting allocates 80% of available space to the DATA disk group and 20% of 
available space to the RECO disk group. 
 
ASM PROTECTION LEVELS: 
 
The protection level specifies the ASM redundancy settings applied to different disk groups. The setting 
will typically vary depending on numerous factors such as the nature of the databases being hosted on 
the database machine, the size of the databases, the choice of backup method and the availability 
targets. Oracle recommends the use of high redundancy disk groups for mission critical applications. 
 
High for ALL: 
Both the DATA disk group and RECO disk group are configured with Oracle ASM high redundancy (triple 
mirroring). This option is only available if the external backup method is selected. 
 
High for DATA: 
The DATA disk group is configured with Oracle ASM high redundancy, and the RECO disk group is 
configured with Oracle ASM normal redundancy (double mirroring). 
 
 
 
High for RECO: 
The DATA disk group is configured with Oracle ASM normal redundancy, and the RECO disk group is 
configured with Oracle ASM high redundancy. This option is only available if the external backup 
method is selected. 
 
Normal for ALL: 
The DATA Disk Group and RECO disk group are configured with Oracle ASM normal redundancy. 
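
The redundancy that was actually configured can be confirmed from the ASM instance. A minimal sketch, run 
with the environment set to the Grid home and ASM SID; TYPE shows NORMAL (double mirroring) or HIGH 
(triple mirroring): 

[oracle@exa01dbadm01 ~]$ sqlplus -s / as sysasm <<'EOF' 
SELECT name, type, total_mb, free_mb FROM v$asm_diskgroup; 
EOF 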
 
SELECTING OPERATING SYSTEM: 
 
Customers now have a choice for the database server operating system: 
 
 Oracle Linux X86_64 (default) 
 Oracle Solaris 11 for x86 
 
Oracle Exadata Database Machine is shipped with the Linux operating system and Solaris operating 
system for the Oracle Database servers. Linux is the default operating system; however, customers can 
choose Solaris instead. Servers are shipped from the factory with both operating systems preloaded. 
After the choice of operating system is made, the disks containing the unwanted operating system must 
be reclaimed so that they can be used by the chosen operating system. 
 
 
ONE COMMAND: 
 
The OneCommand utility is used to configure the Oracle Exadata Database Machine software stack 
based on the information in the configuration files. The steps performed by OneCommand (current as at 
August 2011) are subject to change as the installation and configuration process becomes more refined 
and more automated. 
 
The steps are run sequentially and each step must complete successfully before the next step 
commences. All the steps, or a specified range of steps, can be run using a single command. Steps can 
also be run individually. 
 
If a step fails then in most cases the cause of the failure can be remedied and the process restarted by 
re-running the failed step. Depending on the exact nature of the problem, some failures may require 
additional manual effort to return the Database Machine back to a prefailure state. The README file 
that accompanies OneCommand provides further guidance on these activities, however careful planning 
and execution of all the installation and configuration steps by experienced personnel is a key to avoid 
issues with OneCommand. 
 
Depending on the Database Machine model and capacity, the entire OneCommand process (all steps) 
can take up to about 5 hours to run. 
 
To run OneCommand, change to the /opt/oracle.SupportTools/onecommand directory and run 
deploy112.sh using the -i (install) option. 
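
A hedged sketch of running OneCommand as described above. The -i option is taken from the text; the -l (list 
steps) and -s (run a single step) options are assumptions that should be checked against the README shipped 
with your OneCommand version: 

[root@exa01dbadm01 ~]# cd /opt/oracle.SupportTools/onecommand 
[root@exa01dbadm01 onecommand]# ./deploy112.sh -i -l 
[root@exa01dbadm01 onecommand]# ./deploy112.sh -i -s 1 
[root@exa01dbadm01 onecommand]# ./deploy112.sh -i 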
 
 
 
DATABASE MACHINE ARCHITECTURE OVERVIEW 
 
Database Machine provides a resilient, high-performance platform for clustered and non-clustered 
implementations of Oracle Database. The fundamental architecture underpinning Database Machine is 
the same core architecture as Oracle Real Application Clusters (RAC) software. Key elements of the 
Database Machine architecture are as below: 
 
SHARED STORAGE: 
 
Database Machine provides intelligent, high-performance shared storage to both single-instance and 
RAC implementations of Oracle Database using Exadata Storage Server technology. Storage supplied by 
Exadata Storage Servers is made available to Oracle databases using the Automatic Storage 
Management (ASM) feature of Oracle Database. ASM adds resilience to Exadata Database Machine 
storage by providing a mirroring scheme which can be used to maintain redundant copies of data on 
separate Exadata Storage Servers. This protects against data loss if a storage server is lost. 
 
STORAGE NETWORK: 
 
Database Machine contains a storage network based on InfiniBand technology. This provides high 
bandwidth and low latency access to the Exadata Storage Servers. Fault tolerance is built into the 
network architecture through the use of multiple redundant network switches and network interface 
bonding. 
 
SERVER CLUSTER: 
 
The database servers in Database Machine are designed to be powerful and well balanced so that there 
are no bottlenecks within the server architecture. They are equipped with all of the components 
required for Oracle RAC, enabling customers to easily deploy Oracle RAC across a single Database 
Machine. Where processing requirements exceed the capacity of a single Database Machine, customers 
can join multiple Database Machines together to create a single unified server cluster. 
 
CLUSTER INTERCONNECTS: 
 
The high bandwidth and low latency characteristics of InfiniBand are ideally suited to the requirements 
of the cluster interconnect. Because of this, Database Machine is configured by default to also use the 
InfiniBand storage networks as the cluster interconnect. 
 
SHARED CACHE: 
 
In a RAC environment, the instance buffer caches are shared. If one instance has an item of data in its 
cache that is required by another instance, that data will be shipped to the required node using the 
cluster interconnect. This key attribute of the RAC architecture significantly aids performance because 
the memory-to-memory transfer of information via the cluster interconnect is significantly quicker than 
writing and reading the information using disk. With Database Machine, the shared cache facility uses 
the InfiniBand-based high-performance cluster interconnect. 
 
 
 
ORACLE EXADATA X4-2 NETWORK ARCHITECTURE: 
 
 
 
 
Database Machine contains three network types: 
 
MANAGEMENT (ADMIN) NETWORK (NET0): 
 
Management network is a standard Ethernet/IP network which is used to manage Database Machine. 
The NET0/Management network allows for SSH connectivity to the server. It uses the eth0 interface, 
which is connected to the embedded Cisco switch. The database servers and Exadata Storage Servers 
also provide an Ethernet interface for Integrated Lights-Out Management (ILOM). Using ILOM, 
administrators can remotely monitor and control the state of the server hardware. The InfiniBand 
switches and PDUs also provide Ethernet ports for remote monitoring and management purposes. 
 
CLIENT NETWORK (BONDETH0): 
 
This is also a standard Ethernet network which is primarily used to provide database connectivity via 
Oracle Net software. When Database Machine is initially configured, customers can choose to configure 
the database servers with a single client network interface (NET1) or they can choose to configure a 
bonded network interface (using NET1 and NET2). 
 
The NET1, NET2, NET1-2/Client Access network provides access to the Oracle RAC VIP address and SCAN 
addresses. It uses interfaces eth1 and eth2, which are typically bonded as bondeth0. These interfaces 
are connected to the data center network. When channel bonding is configured for the client access 
network during initial configuration, the Linux bonding module is configured for active-backup mode 
(Mode 1). A bonded client access network interface can provide protection if a network interface fails; 
however, using bonded interfaces may require additional configuration in the customer’s network. 
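
The state of the bonded client interface can be checked on a database server. A sketch with abbreviated output; 
the bonding mode and active slave shown are the expected values for the default active-backup configuration 
described above: 

[root@exa01dbadm01 ~]# egrep 'Bonding Mode|Currently Active Slave|MII Status' /proc/net/bonding/bondeth0 
Bonding Mode: fault-tolerance (active-backup) 
Currently Active Slave: eth1 
MII Status: up 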
 
 
 
INFINIBAND NETWORK (BONDIB0): 
 
The InfiniBand network provides a reliable high-speed storage network and cluster interconnect for 
Database Machine. It can also be used to provide high performance external connectivity to backup 
servers, ETL servers and middleware servers such as Oracle Exalogic Elastic Cloud. Each database server 
and Exadata Storage Server is connected to the InfiniBand network using a bonded network interface 
(BONDIB0). 
The IB network connects two ports on the Database servers to both of the InfiniBand leaf switches in the 
rack. All storage server communication and Oracle RAC interconnect traffic uses this network. The 
InfiniBand interfaces (ib0 and ib1) are typically bonded as bondib0. 
 
OPTIONAL NETWORK: 
 
Each X4-2 database server also contains a spare Ethernet port (NET3) which can be used to configure an 
additional client access network. Each X4-2 database server is also equipped with two 10 gigabit 
Ethernet (10 GbE) interfaces which can be used for client connectivity. These interfaces can be bonded 
together or connected to separate networks. Customers must have the required network infrastructure 
for 10 GbE to make use of these interfaces. 
 
 
IP SHEET 
 
 
 
 
 
 
X4-2 DATABASE SERVER OVERVIEW 
 
 2 Twelve-Core Intel Xeon E5-2697 v2 processors (2.7 GHz) 
 256 GB (16 x 16 GB) RAM expandable to 512 GB with memory expansion kit 
 4 x 600 GB 10K RPM SAS disks 
 Disk controller HBA with 512 MB battery-backed write cache, and swappable Battery Backup Unit 
 2 InfiniBand 4X QDR (40 Gb/s) ports (1 dual-port PCIe 3.0 Host Channel Adapter (HCA)) 
 4 x 1 GbE/10GbE Base-T Ethernet ports 
 2 x 10 GbE Ethernet SFP+ ports (1 dual-port 10GbE PCIe 2.0 network card based on the Intel 82599 
10 GbE controller technology) 
 1 Ethernet port for Integrated Lights Out Manager (ILOM) for remote management 
 Oracle Linux 5 Update 10 with Unbreakable Enterprise Kernel 2 or Oracle Solaris 11 Update 1 
 
DATABASE SERVER ARCHITECTURE: 
 
 
 
The following components reside on the database server: 
 Customers can choose between Oracle Linux x86_64 and Solaris 11 Express for x86 as the operating 
system for Exadata Database Machine database servers. 
 
 Exadata Database Machine database servers run Oracle Database 11g Release 2. The precise patch 
release of Oracle Database software must be compatible with the Exadata Storage Server software 
and other Database Machine software components. My Oracle Support bulletin 888828.1 contains 
an up-to-date list of the supported versions for the Database Machine software components. 
 
 
 
 Automatic Storage Management (ASM) is required and provides a file system and volume manager 
optimized for Oracle Database. You can connect multiple separate ASM environments with separate 
disk groups to a pool of Exadata Storage Servers. 
 
 Oracle Database communicates with Exadata cells using a special library called LIBCELL 
($ORACLE_HOME/lib/libcell11.so). In combination with the database kernel and ASM, LIBCELL 
transparently maps database I/O operations to Exadata Storage Server enhanced operations. 
LIBCELL communicates with Exadata cells using the Intelligent Database protocol (iDB). iDB is a 
unique Oracle data transfer protocol, built on Reliable Datagram Sockets (RDS), that runs on 
industry standard InfiniBand networking hardware. LIBCELL and iDB enable ASM and database 
instances to utilize Exadata Storage Server features, such as Smart Scan and I/O Resource 
Management. 
 
 Database Resource Manager (DBRM) is integrated with Exadata Storage Server I/O Resource 
Management (IORM). DBRM and IORM work together to ensure that I/O resources are allocated 
based on administrator-defined priorities. 
 
 
 
The above diagram shows some additional processes that run on the database servers which relate to 
the use of Exadata cells for storage inside Database Machine. 
 
Diskmon checks the storage network interface state and cell aliveness; it also performs DBRM plan 
propagation to Exadata cells. 
 
Diskmon uses a node wide master process (diskmon.bin) and one slave process (DSKM) for each RDBMS 
and ASM instance. The master performs monitoring and propagates state information to the slaves. 
Slaves use the SGA to communicate with RDBMS and ASM processes. 
 
 
 
If there is a failure in the cluster, Diskmon performs I/O fencing to protect data integrity. Cluster 
Synchronization Services (CSS) still decides what to fence. Master Diskmon starts with the Clusterware 
processes. The slave Diskmon processes are background processes which are started and stopped in 
conjunction with the associated RDBMS and ASM instance. 
 
KEY CONFIGURATION FILES: 
 
cellinit.ora: 
 CELL local IP addresses (ib0 interface) 
 Location: /etc/oracle/cell/network-config/cellinit.ora 
 
[root@exa01dbadm01 network-config]# cat cellinit.ora 
ipaddress1=192.xx.xx.1/22 
ipaddress2=192.xx.xx.2/22 
 
cellip.ora: 
 Contains the accessible storage cell IP addresses. 
 Location: /etc/oracle/cell/network-config/cellip.ora 
 
[root@exa01dbadm01 network-config]# cat cellip.ora 
cell="192.xx.xx.5;192.xx.xx.6" 
cell="192.xx.xx.7;192.xx.xx.8" 
cell="192.xx.xx.9;192.xx.xx.10" 
 
cellroute.ora: 
 Contains the accessible routes to the storage cells. 
 Location: /etc/oracle/cell/network-config/cellroute.ora 
 
[root@exa01dbadm01 network-config]# cat cellroute.ora 
# Routes for 192.xx.xx.5;192.xx.xx.6 
route="192.xx.xx.5;192.xx.xx.1" 
route="192.xx.xx.6;192.xx.xx.2" 
 
# Routes for 192.xx.xx.7;192.xx.xx.8 
route="192.xx.xx.7;192.xx.xx.1" 
route="192.xx.xx.8;192.xx.xx.2" 
 
# Routes for 192.xx.xx.9;192.xx.xx.10 
route="192.xx.xx.9;192.xx.xx.1" 
route="192.xx.xx.10;192.xx.xx.2" 
 
 
 
 
 
 
 
INFINIBAND NETWORK 
 
 Is the Database Machine interconnect fabric: 
– Provides highest performance available – 40 Gb/sec each direction 
 Is used for storage networking, RAC interconnect and high-performance external connectivity: 
– Less configuration, lower cost, higher performance 
 Looks like normal Ethernet to host software: 
– All IP-based tools work transparently – TCP/IP, UDP, SSH, and so on 
 Has the efficiency of a SAN: 
– Zero copy and buffer reservation capabilities 
 Uses high-performance ZDP InfiniBand protocol (RDS V3): 
– Zero-copy, zero-loss Datagram protocol 
– Open Source software developed by Oracle 
– Very low CPU overhead 
 
InfiniBand is the only storage network supported inside Database Machine because of its performance 
and proven track record in high-performance computing. InfiniBand works like normal Ethernet but is 
much faster. It has the efficiency of a SAN, using zero copy and buffer reservation. Zero copy means that 
data is transferred across the network without intermediate buffer copies in the various network layers. 
Buffer reservation is used so that the hardware knows exactly where to place buffers ahead of time. 
 
Oracle Exadata uses the Intelligent Database protocol (iDB) to transfer data between Database Node 
and Storage Cell Node. It is implemented in the database kernel and transparently maps database 
operations to Exadata operations. iDB can be used to transfer SQL operation from Database Node to Cell 
node, and get query result back or full data blocks back from Cell Node. 
 
iDB is built on Reliable Datagram Sockets (RDS v3) protocol and runs over InfiniBand ZDP (Zero-loss 
Zero-copy Datagram Protocol). The objective of ZDP is to eliminate unnecessary copying of blocks. RDS is 
based on the Socket API and provides low overhead, low latency, and high bandwidth. The Exadata cell node 
can send and receive large transfers using Remote Direct Memory Access (RDMA). 
 
RDMA is direct memory access from the memory of one server into the memory of another without involving 
either server’s operating system. The transfer requires no work to be done by CPUs, caches, or context 
switches, and transfers continue in parallel with other system operations. It is quite useful in massively 
parallel processing environments. 
 
RDS is heavily used on Oracle Exadata. RDS delivers highly available, low-overhead datagrams, similar to 
UDP but more reliable and with zero copy. It accesses InfiniBand via the Socket API. RDS v3 
supports both RDMA read and write and allows large data transfers of up to 8 MB. It also supports 
control messages for asynchronous operations, such as submit and completion notifications. 
 
http://www.openfabrics.org/
http://oss.oracle.com/projects/rds/
 
 
 
 
If you want to optimize communications between Oracle engineered systems, such as Exadata, Big Data 
Appliance, and Exalytics, you can use the Sockets Direct Protocol (SDP). SDP only deals 
with stream sockets. 
SDP allows high-performance zero-copy data transfers via RDMA network fabrics and uses a standard 
wire protocol over an RDMA fabric to support stream sockets (SOCK_STREAM). The goal of SDP is to 
provide an RDMA-accelerated alternative to the TCP protocol over IP that remains transparent to the 
application. 
It bypasses the OS resident TCP stack for stream connections between any endpoints on the RDMA 
fabric. All other socket types (such as datagram, raw, packet, etc.) are supported by the IP stack and 
operate over standard IP interfaces (i.e., IPoIB on InfiniBand fabrics). The IP stack has no dependency on 
the SDP stack; however, the SDP stack depends on IP drivers for local IP assignments and for IP address 
resolution for endpoint identifications. 
 
 
 
 
 
 
INFINIBAND NETWORK OVERVIEW: 
 
 
 
Each Database Machine contains at least two InfiniBand Switches which connect the database servers 
and storage servers. These are called leaf switches. A third switch, called the spine switch, connects to 
both leaf switches. The spine switch facilitates connection of multiple racks to form a single larger 
Database Machine environment. 
 
Each server contains at least one pair of InfiniBand ports which are bonded together using active/active 
bonding. The two active connections are spread across both leaf switches, which doubles the 
throughput. In addition, the leaf switches within a rack are connected to each other. The result is a Fat- 
Tree switched fabric network topology. 
 
 
 
EXADATA AND EXALOGIC INTEGRATION: 
 
 
 
POWERING OFF ORACLE EXADATA RACK 
 
The power off sequence for Oracle Exadata Rack is as follows: 
1. Database servers (Oracle Exadata Database Machine only). 
2. Exadata Storage Servers. 
3. Rack, including switches. 
 
POWERING OFF DATABASE SERVERS: 
The following procedure is the recommended shutdown procedure for database servers: 
a. Stop Oracle Clusterware using the following command: 
# $GRID_HOME/bin/crsctl stop cluster 
If any resources managed by Oracle Clusterware are still running after issuing the crsctl stop cluster 
command, then the command fails. Use the -f option to unconditionally stop all resources, and stop 
Oracle Clusterware. 
b. Shut down the operating system using the following command: 
# shutdown -h -y now 
 
POWERING OFF EXADATA STORAGE SERVERS: 
Exadata Storage Servers are powered off and restarted using the Linux shutdown command. The 
following command shuts down Exadata Storage Server immediately: 
# shutdown -h -y now 
 
When powering off Exadata Storage Servers, all storage services are automatically stopped. 
(shutdown -r -y now command restarts Exadata Storage Server immediately.) 
 
Note the following when powering off Exadata Storage Servers: 
 All database and Oracle Clusterware processes should be shut down prior to shutting down 
more than one Exadata Storage Server. 
 
 
 
 Powering off one Exadata Storage Server does not affect running database processes or Oracle 
ASM. 
 Powering off or restarting Exadata Storage Servers can impact database availability. 
 The shutdown commands can be used to power off or restart database servers. 
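
After the databases and Oracle Clusterware have been stopped, all Exadata Storage Servers can be shut down 
in one pass from a database server using the dcli utility. A sketch; cell_group is an assumed file listing the cell 
host names, one per line: 

[root@exa01dbadm01 ~]# dcli -g cell_group -l root 'shutdown -h -y now' 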
 
POWERING ON AND OFF NETWORK SWITCHES: 
 
The network switches do not have power switches. They are powered off when power is removed, by way 
of the power distribution units (PDUs) or at the breaker in the data center. 
 
POWERING ON ORACLE EXADATA RACK 
The power on sequence for Oracle Exadata Rack is the reverse of the power off sequence: 
 
1. Rack, including switches. 
2. Exadata Storage Servers. 
3. Database servers (Oracle Exadata Database Machine only). 
4. Start the cluster. 
5. Start the databases. 
 
PDU (POWER DISTRIBUTION UNIT): 
The Exadata Database Machine has two PDUs for redundancy. Each PDU has a 3-phase power supply. 
 
 
 
 
 
 
 
INTEGRATED LIGHTS OUT MANAGER (ILOM) 
 
 What is it? 
– Integrated service processor hardware and software 
 What does it do? 
– Provides out-of-band server monitoring and management to: 
– Remotely control the power state of a server 
– View the status of sensors and indicators on the system 
– Provide a remote server console 
– Generates alerts for hardware errors and faults as they occur 
 Where is it found? 
– Exadata Database Machine database servers and Exadata Storage Servers 
 
Oracle Integrated Lights Out Manager (ILOM) provides advanced service processor (SP) hardware and 
software that you can use to manage and monitor your Exadata machine components, such as compute 
nodes, storage nodes, and the InfiniBand switch. ILOM's dedicated hardware and software is 
preinstalled on these components. 
 
ILOM enables you to actively manage and monitor compute nodes in the Exadata machine 
independently of the operating system state, providing you with a reliable Lights Out Management 
system. 
 
With ILOM, you can proactively: 
 Learn about hardware errors and faults as they occur 
 Remotely control the power state of your compute node & cell node. 
 View the graphical and non-graphical consoles for the host 
 View the current status of sensors and indicators on the system 
 Determine the hardware configuration of your system 
 Receive generated alerts about system events in advance via IPMI PETs, SNMP Traps, or E-mail 
Alerts. 
 
 
 
The ILOM service processor (SP) runs its own embedded operating system and has a dedicated Ethernet 
port, which together provides out-of-band management capability. In addition, you can access ILOM 
from the compute node's operating system. Using ILOM, you can remotely manage your compute node 
as if you were using a locally attached keyboard, monitor, and mouse. 
 
ILOM automatically initializes as soon as power is applied to your compute node. It provides a full-
featured, browser-based web interface and has an equivalent command-line interface (CLI). 
Exadata compute nodes are configured at the time of manufacturing to use Sideband Management. This 
configuration eliminates separate cables for the Service Processor (SP) NET MGT port and the NET0 Port. 
 
ILOM Interfaces 
ILOM supports multiple interfaces for accessing its features and functions. You can choose to use a 
browser-based web interface or a command-line interface. 
Web Interface 
The web interface provides an easy-to-use browser interface that enables you to log in to the SP, then to 
perform system management and monitoring. 
Command-Line Interface 
The command-line interface enables you to operate ILOM using keyboard commands and adheres to 
industry-standard DMTF-style CLI and scripting protocols. ILOM supports SSH v2.0 and v3.0 for secure 
access to the CLI. 
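
A short sketch of an ILOM CLI session over SSH. The -ilom host name is an assumed naming convention, and 
the targets shown are standard ILOM CLI targets: 

[root@exa01dbadm01 ~]# ssh root@exa01dbadm01-ilom 
-> show /SYS 
-> start /SP/console 
-> stop /SYS 
-> start /SYS 

show /SYS displays the hardware status, start /SP/console attaches the remote host console, and stop /SYS and 
start /SYS control the host power state. 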
 
 
 
 
 
 
 
 
AUTO SERVICE REQUEST (ASR) OVERVIEW: 
 
Auto Service Request (ASR) is designed to automatically open service requests when specific Oracle 
Exadata Rack hardware faults occur. To enable this feature, the Oracle Exadata Rack components must 
be configured to send hardware fault telemetry to the ASR Manager software. This service covers 
components in Exadata Storage Servers and Oracle Database servers, such as disks and flash cards. 
 
ASR Manager must be installed on a server that has connectivity to Oracle Exadata Rack, and an 
outbound Internet connection using HTTPS or an HTTPS proxy. ASR Manager can be deployed on a 
standalone server running Oracle Solaris or Linux, or a database server on Oracle Exadata Database 
Machine. Oracle recommends that ASR Manager be installed on a server outside of Oracle Exadata Rack. 
Once installation is complete, configure fault telemetry destinations for the servers on Oracle Exadata 
Database Machine. 
The following are some of the reasons for the recommendation: 
 If the server that has ASR Manager installed goes down, then ASR Manager is unavailable for the 
other components of Oracle Exadata Database Machine. This is very important when there are 
several Oracle Exadata Database Machines using ASR at a site. 
 In order to submit a service request (SR), the server must be able to access the Internet. 
 
Note: ASR can only use the management network. Ensure the management network is set up to allow 
ASR to run. 
 
When a hardware problem is detected, ASR Manager submits a service request to Oracle Support 
Services. In many cases, Oracle Support Services can begin work on resolving the issue before the 
database administrator is even aware the problem exists. 
 
 
 
Prior to using ASR, the following must be set up: 
 Oracle Premier Support for Systems or Oracle/Sun Limited Warranty 
 Technical contact responsible for Oracle Exadata Rack 
 Valid shipping address for Oracle Exadata Rack parts 
 
An e-mail message is sent to the technical contact for the activated asset to notify the creation of the 
service request. The following are examples of the disk failure Simple Network Management Protocol 
(SNMP) traps sent to ASR Manager. 
 
NOTES: 
 ASR is applicable only for component faults. Not all component failures are covered, though the 
most common components such as disk, fan, and power supplies are covered. 
 
 ASR is not a replacement for other monitoring mechanisms, such as SMTP and SNMP alerts, within 
the customer data center. ASR is a complementary mechanism that expedites and simplifies the 
delivery of replacement hardware. ASR should not be used for downtime events in high-priority 
systems. For high-priority events, contact Oracle Support Services directly. 
 
 There are occasions when a service request may not be automatically filed. This can happen because 
of the unreliable nature of the SNMP protocol or loss of connectivity to the ASR Manager. Oracle 
recommends that customers continue to monitor their systems for faults, and call Oracle Support 
Services if they do not receive notice that a service request has been automatically filed. 
 
 
 
 
 
 
 
STORAGE SERVER OVERVIEW 
 
 
 
 
 
 
 
Exadata Storage Server is highly optimized storage for use with Oracle Database. It delivers outstanding 
I/O and SQL processing performance for data warehousing and OLTP applications. 
Each Exadata Storage Server is based on a 64 bit Intel-based Sun Fire server. Oracle provides the storage 
server software to impart database intelligence to the storage, and tight integration with Oracle 
database and its features. Each Exadata Storage Server is shipped with all the hardware and software 
components preinstalled including the Exadata Storage Server software, Oracle Linux x86_64 operating 
system and InfiniBand protocol drivers. 
Exadata Storage Server is only available for use in conjunction with Database Machine. Individual 
Exadata Storage Servers can be purchased; however, they must be connected to a Database Machine. 
Custom configurations using Exadata Storage Servers are not supported for new installations. 
 
 
 
 
 
EXADATA STORAGE SERVER ARCHITECTURE OVERVIEW: 
 
 Exadata Storage Server is a self-contained storage platform that houses disk storage and runs the 
Exadata Storage Server software provided by Oracle. 
 Exadata Storage Server is also called a cell. A cell is the building block for a storage grid. 
 More cells provide greater capacity and I/O bandwidth. 
 Databases are typically deployed across multiple cells, and multiple databases can share the storage 
provided by a single cell. 
 The databases and cells communicate with each other via a high-performance InfiniBand network. 
 Each cell is a purely dedicated storage platform for Oracle Database files although you can use 
Database File System (DBFS), a feature of Oracle Database, to store your business files inside the 
database. 
 Each cell is a computer with CPUs, memory, a bus, disks, network adapters, and the other 
components normally found in a server. 
 It also runs an operating system (OS), which in the case of Exadata Storage Server is Linux x86_64. 
 The Oracle provided software resident in the Exadata cell runs under this operating system. 
 The OS is accessible in a restricted mode to administer and manage Exadata Storage Server. 
 You CANNOT install any additional software on a cell or make any other changes to its operating system. 
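
Although nothing may be installed on a cell, read-only checks are routinely run from this restricted OS; for 
example, imageinfo reports the Exadata Storage Server software image. A sketch with abbreviated output (the 
version shown matches the cell software path used elsewhere in this document): 

[root@exa01celadm01 ~]# imageinfo | grep 'Active image version' 
Active image version: 12.1.1.1.1.140712 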
 
 
 
 
PROCESSORS: 
 
MEMORY: 
 
 
STORAGE: 
 
CellCLI> list celldisk attributes name, disktype, size 
 CD_00_exa01celadm01 HardDisk 3691.484375G 
 CD_01_exa01celadm01 HardDisk 3691.484375G 
 CD_02_exa01celadm01 HardDisk 3725.28125G 
 CD_03_exa01celadm01 HardDisk 3725.28125G 
 CD_04_exa01celadm01 HardDisk 3725.28125G 
 CD_05_exa01celadm01 HardDisk 3725.28125G 
 
 FD_00_exa01celadm01 FlashDisk 186.25G 
 FD_01_exa01celadm01 FlashDisk 186.25G 
 FD_02_exa01celadm01 FlashDisk 186.25G 
 FD_03_exa01celadm01 FlashDisk 186.25G 
 FD_04_exa01celadm01 FlashDisk 186.25G 
 FD_05_exa01celadm01 FlashDisk 186.25G 
 FD_06_exa01celadm01 FlashDisk 186.25G 
 FD_07_exa01celadm01 FlashDisk 186.25G 
 
 
 
DATABASE MACHINE SOFTWARE ARCHITECTURE DETAILS: 
 
 
THE CELL PROCESSES: 
 
1. Cell Server (CELLSRV): 
 CELLSRV communicates with LIBCELL. LIBCELL converts I/O requests into messages (data requests 
along with their metadata) that are sent to CELLSRV using the iDB protocol. 
 CELLSRV is a multithreaded server. 
 CELLSRV is able to use the metadata to process the data before sending results back to the 
database layer. 
 CELLSRV serves simple block requests, such as database buffer cache reads, and Smart Scan 
requests, such as table scans with projections and filters. 
 CELLSRV serves Oracle blocks when SQL offload is not possible. 
 CELLSRV also implements IORM, which works in conjunction with DBRM. 
 CELLSRV collects numerous statistics relating to its operations. 
 
2. Management Server (MS): 
 MS provides Exadata cell management, configuration and administration functions. 
 MS works in cooperation with the Exadata cell command-line interface (CellCLI). 
 MS is responsible for sending alerts and collects some statistics in addition to those collected by 
CELLSRV. 
 
3. Restart Server (RS): 
 RS is used to start up/shut down the CELLSRV and MS services and monitors these services to 
automatically restart them if required. 
 
 
 
Cellrssrm: The cellrssrm process is the main Restart Server process. It launches 3 helper processes: 
cellrsomt, cellrsbmt and cellrsmmt 
 
Cellrsomt: ultimately responsible for launching cellsrv. 
Cellrsbmt and cellrsmmt: Responsible for launching cellrsbkm and the main MS Java process. 
Cellrssmt is called by cellrsbkm, and its ultimate goal is to ensure cell configuration files are valid, 
consistent, and backed up. 
 
[root@exa1celadm02 bin]# ps -ef | grep -i cellrs 
root 13252 1 0 Mar17 ? 00:11:25 /opt/oracle/cell/cellsrv/bin/cellrssrm -ms 1 -cellsrv 1 
root 13259 13252 0 Mar17 ? 00:06:37 /opt/oracle/cell/cellsrv/bin/cellrsbmt -rs_conf 
root 13260 13252 0 Mar17 ? 00:03:58 /opt/oracle/cell/cellsrv/bin/cellrsmmt -rs_conf 
root 27059 13252 0 Mar26 ? 00:32:26 /opt/oracle/cell/cellsrv/bin/cellrsomt -rs_conf 
root 13262 13259 0 Mar17 ? 00:01:47 /opt/oracle/cell/cellsrv/bin/cellrsbkm -rs_conf 
root 13269 13262 0 Mar17 ? 00:06:27 /opt/oracle/cell/cellsrv/bin/cellrssmt -rs_conf 
 
 
[root@exa01celadm02 ~]# ps -ef | grep -i ms | egrep java 
root 1588 13260 0 Aug04 ? 10:32:55 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:- 
 
[root@exa01celadm02 ~]# ps -ef | grep -i 13260 
root 1588 13260 0 Aug04 ? 10:32:55 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:- 
root 13260 13252 0 Mar17 ? 00:04:44 /opt/oracle/cell/cellsrv/bin/cellrsmmt -rs_conf 
 
EXAMPLE: 
 
[root@exa01celadm01 trace]# tail -50f /var/log/oracle/diag/asm/cell/localhost/trace/alert.log 
Thu Oct 15 11:05:49 2015 
Errors in file /opt/oracle/cell/log/diag/asm/cell/exa01celadm01/trace/svtrc_9016_22.trc (incident=25): 
ORA-00600: internal error code, arguments: [ossdebugdisk:cellsrvstatIoctl_missingstat], [228], 
[Database group composite metric], [], [], [], [], [], [], [], [], [] 
Incident details in: 
/opt/oracle/cell/log/diag/asm/cell/dm01celadm01/incident/incdir_25/svtrc_9016_22_i25.trc 
Sweep [inc][25]: completed 
Thu Oct 15 11:05:52 2015 876 msec State dump completed for CELLSRV<9016> after ORA-600 occurred 
CELLSRV error - ORA-600 internal error 
Thu Oct 15 11:05:53 2015 
[RS] monitoring process /opt/oracle/cell/cellsrv/bin/cellrsomt (pid: 9014) returned with error: 128 
Thu Oct 15 11:05:53 2015 
[RS] Started monitoring process /opt/oracle/cell/cellsrv/bin/cellrsomt with pid 23594 
Thu Oct 15 11:05:53 2015 
CELLSRV process id=23596 
 
 
 
 
 
CELLSRVSTAT: 
 
Cellsrvstat is a very useful utility for getting cell-level statistics for all the logical components of the cell, such 
as memory, io, smartio, flashcache, and so on. 
 
Cellsrvstat is used to get quick cell-level statistics from the cell storage. It also helps you to get information 
about offloading and the storage indexes. 
 
[root@dm01celadm01 ~]# cellsrvstat -list 
Statistic Groups: 
io Input/Output related stats 
mem Memory related stats 
exec Execution related stats 
net Network related stats 
smartio SmartIO related stats 
flashcache FlashCache related stats 
health Cellsrv health/events related stats 
offload Offload server related stats 
database Database related stats 
ffi FFI related stats 
lio LinuxBlockIO related stats 
 
Simply running the utility from the command prompt, without any additional parameters or qualifiers, 
produces the output. You can also restrict the output of cellsrvstat by using the -stat_group parameter 
to specify which group, or groups, you want to monitor. 
 
In non-tabular mode, the output has three columns. The first column is the name of the metric, the 
second one is the difference between the last and the current value (delta), and the third column is the 
absolute value. 
In Tabular mode absolute values are printed as is without delta. cellsrvstat -list command points out the 
statistics that are absolute values. 
 
You can get the list of all statistics by executing below command on any of the cell. 
You can also use DCLI utility to get statistics output from each cell at a time and execute it from one of 
your database server. Make sure SSH connectivity is configured between cell and db node. 
 
#dcli -g cellgroup -l root 'cellsrvstat -stat_group=io -interval=30 -count=2' > /tmp/cellstats.txt 
 
Here:
- cellgroup is the file that contains the list of IP addresses (or host names) of all the storage cells.
- We have used a 30-second interval and are collecting the output 2 times, which you can change as per your requirement.
- You can specify -stat_group if you want statistics for a specific group, i.e. io, smartio, mem, net, etc.
- We are saving the statistics output into the /tmp/cellstats.txt file.
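
Before running the DCLI command above, you can verify that SSH equivalence to all the cells is working with a simple test command, for example:

#dcli -g cellgroup -l root 'hostname'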
 
 
 
 
KEY CONFIGURATION FILES: 
 
cell_disk_config.xml: 
 MS internal dictionary 
 Contains information about the DISKS. 
 Location: /opt/oracle/cell12.1.1.1.1_LINUX.X64_140712/cellsrv/deploy/config/cell_disk_config.xml 
 
cellinit.ora: 
 CELL Initialization Parameters and IP addresses 
 Location: /opt/oracle/cell12.1.1.1.1_LINUX.X64_140712/cellsrv/deploy/config/cellinit.ora 
 
Cell.conf: 
 CELL configuration file. 
 Location: /opt/oracle.cellos/cell.conf 
 
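These are plain text and XML files, so they can be inspected directly on the cell (the cell software version in the path below will differ from one installation to another):

#cat /opt/oracle.cellos/cell.conf
#cat /opt/oracle/cell12.1.1.1.1_LINUX.X64_140712/cellsrv/deploy/config/cellinit.ora
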
LOG FILES: 
 
$CELLTRACE is the location of the cell alert log file, the MS log file, and the trace files.
 
[root@exa01celadm01 ~]# cd $CELLTRACE 
[root@exa01celadm01 trace]# pwd 
/opt/oracle/cell12.1.1.1.1_LINUX.X64_140712/log/diag/asm/cell/exa01celadm01/trace 
 
[root@exa01celadm01 trace]# ls -ltr *.log 
-rw-rw---- 1 root celladmin 459834 Oct 15 11:24 alert.log 
-rw-r--r-- 1 root celladmin 2356652 Oct 15 13:37 ms-odl.log 
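
For troubleshooting, the alert log and the MS log can be followed directly from this directory, for example:

#tail -100f $CELLTRACE/alert.log
#tail -100f $CELLTRACE/ms-odl.log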
 
BACKGROUND PROCESSES: 
 
The background processes for the database and Oracle ASM instances in an Oracle Exadata Storage Server environment are the same as in other environments, except for the following background processes:
 DISKMON Process 
 XDMG Process 
 XDWK Process 
 
DISKMON Process: 
The DISKMON process is a fundamental component of Oracle Exadata Storage Server Software, and is 
responsible for implementing I/O fencing. The process is located on the database server host computer, 
and is part of Oracle Clusterware Cluster Ready Services (CRS). This process is important for Oracle 
Exadata Storage Server Software and should not be modified. The log files for Diskmon are located in 
the $CRS_HOME/log/hostname/Diskmon directory 
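
A quick way to confirm that the process is running on a database server is to search the process list (the exact process name and owner can vary with the Grid Infrastructure version):

#ps -ef | grep -i diskmon | grep -v grep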
 
 
 
 
XDMG Process: 
The XDMG (Exadata Automation Manager) process initiates automation tasks used for monitoring 
storage. This background process monitors all configured Oracle Exadata Storage Servers for state 
changes, such as replaced disks, and performs the required tasks for such changes. Its primary task is to 
watch for inaccessible disks and cells, and to detect when the disks and cells become accessible. When 
the disks and cells are accessible, the XDMG process initiates the ASM ONLINE process, which is handled 
by the XDWK background process. The XDMG process runs in the Oracle ASM instances. 
 
XDWK Process: 
The XDWK (Exadata Automation Worker) process performs automation tasks requested by the XDMG
background process. The XDWK process begins when asynchronous actions, such as ONLINE, DROP or 
ADD for an Oracle ASM disk are requested by the XDMG process. The XDWK process stops after 5 
minutes of inactivity. The XDWK process runs in the Oracle ASM instances. 
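
Because both processes run inside the Oracle ASM instances, they can be seen in the process list on the database servers, for example:

#ps -ef | egrep -i 'xdmg|xdwk' | grep -v grep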
 
THE EXADATA CELL:
- The Exadata Storage Server itself is called a cell.
- The Exadata Storage Server contains two types of physical disks: hard disks and flash disks.
- Each Exadata cell contains 12 physical hard disks (600 GB High Performance or 4 TB High Capacity).
 
LAYERS OF THE DISK: 
There are four layers of a disk in the Exadata Storage Server. CellCLI is the command-line utility used to maintain the disks.
 
1. Physical Disk: 
 
Physical disks can be hard disks or flash disks. You cannot create or drop a physical disk. The only administrative tasks at this layer are turning the service LED on the front of the cell on or off and listing the physical disks.
 
Examples:
CELLCLI> ALTER PHYSICALDISK 20:0 SERVICELED ON
CELLCLI> ALTER PHYSICALDISK 20:0 DROP FOR REPLACEMENT
CELLCLI> ALTER PHYSICALDISK 20:0 REENABLE
CELLCLI> ALTER PHYSICALDISK 20:0 SERVICELED OFF
CELLCLI> LIST PHYSICALDISK
 
 
 
2. LUN: 
 
LUNs are the second layer of abstraction. The first two LUNs in every cell contain the operating system (Oracle Enterprise Linux). About 29 GB is reserved on each of the first two hard disks for this purpose, and these two disks are redundant to each other, so the cell can still operate if one of the first two hard disks fails. The LUNs are equally sized on every hard disk, but the usable space (for cell disks and, in turn, grid disks) is about 30 GB less on the first two. As an administrator, you cannot do anything at the LUN layer except look at it.
 
Example: 
CELLCLI > LIST LUN 
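
To identify which LUNs host the system areas, you can filter on the isSystemLun attribute (the attribute names used here appear in the sample listings later in this chapter):

CELLCLI> LIST LUN ATTRIBUTES name, lunSize, isSystemLun WHERE isSystemLun = TRUE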
 
3. Cell Disk: 
 
Cell disks are the third layer of abstraction. As an administrator, you can create and drop cell disks, although you will rarely, if ever, need to do that.
 
Examples: 
CELLCLI> LIST CELLDISK 
CELLCLI> CREATE CELLDISK ALL HARDDISK 
CELLCLI> DROP CELLDISK ALL 
CELLCLI> ALTER CELLDISK 123 name='abc', comment='name was changed to abc' 
 
4. Grid Disk: 
 
Grid disks are the fourth layer of abstraction, and they will be the Candidate Disks to build your ASM 
diskgroups. By default (interleaving=none on the Cell disk layer), the first Grid disk that is created upon a 
Cell disk is placed on the outer sectors of the underlying Hard disk. It will have the best performance. If 
we follow the recommendations, we will create 3 Diskgroups upon our Grid disks: DATA, RECO and 
DBFS_DG. 
 
DATA is supposed to be used as the Database Area (DB_CREATE_FILE_DEST=’+DATA’ on the Database 
Layer), RECO will be the Recovery Area (DB_RECOVERY_FILE_DEST=’+RECO’) and DBFS_DG will be used 
to hold the voting files, the OCR, and DBFS file systems if needed. It makes sense that DATA has better performance than RECO, while DBFS_DG can be placed on the slowest (inner) part of the hard disks.
So as an administrator, you can (and most likely will) create and drop grid disks; typically three grid disks are carved out of each cell disk.
 
Examples: 
CELLCLI> LIST GRIDDISK 
CELLCLI> ALTER GRIDDISK 123 name='abc', comment='name was changed to abc' 
CELLCLI> DROP GRIDDISK GD123_0 
CELLCLI> DROP GRIDDISK ALL PREFIX=DATAC1 
CELLCLI> CREATE GRIDDISK GD123_0 celldisk = CD123, size =100M 
CELLCLI> CREATE GRIDDISK ALL PREFIX=data1, size=50M 
 
 
 
 
 Exadata cell software automatically senses the physical disks in each storage server. 
 As a cell administrator you can only view physical disk attributes. Each physical disk is mapped to a 
Logical Unit (LUN). 
 A LUN exposes additional predefined metadata attributes to a cell administrator. You cannot create 
or remove a LUN; they are automatically created. 
 Each of the first two LUNs contains a system area that spans multiple disk partitions. The two system 
areas are mirror copies of each other which are maintained using software mirroring. 
 The system areas consume approximately 29 GB on each disk. The system areas contain the OS 
image, swap space, Exadata cell software binaries, metric and alert repository, and various other 
configuration and metadata files. 
 A cell disk is a higher level abstraction that represents the data storage area on each LUN. For the 
two LUNs that contain the system areas, Exadata cell software recognizes the way that the LUN is 
partitioned and maps the cell disk to the disk partition reserved for data storage. For the other 10 
disks, Exadata cell software maps the cell disk directly to the LUN. 
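
You can see this mapping by listing the device and partition behind each hard-disk-based cell disk; on the first two cell disks the devicePartition attribute points to a dedicated data partition (for example /dev/sda3, as in the detailed listing later in this chapter), while on the remaining disks it covers the whole device:

CellCLI> LIST CELLDISK ATTRIBUTES name, deviceName, devicePartition WHERE diskType = 'HardDisk'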
 
 
 
 After a cell disk is created, it can be subdivided into one or more grid disks, which are directly 
exposed to ASM. 
 
Placing multiple grid disks on a cell disk allows the administrator to segregate the storage into pools with 
different performance characteristics. For example, a cell disk could be partitioned so that one grid disk 
resides on the highest performing portion of the disk (the outermost tracks on the physical disk), 
whereas a second grid disk could be configured on the lower performing portion of the disk (the inner 
tracks). The first grid disk might then be used in an ASM disk group that houses highly active (hot) data, 
while the second grid disk might be used to store less active (cold) data files. 
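
As a sketch of this hot/cold layout, two grid disks could be carved from a single cell disk using the CREATE GRIDDISK syntax shown earlier; the names and sizes below are purely illustrative:

CELLCLI> CREATE GRIDDISK HOT_CD_00_exa01celadm01 celldisk=CD_00_exa01celadm01, size=2000G
CELLCLI> CREATE GRIDDISK COLD_CD_00_exa01celadm01 celldisk=CD_00_exa01celadm01, size=1000G

The first grid disk is allocated on the outermost (fastest) free space, and the second on the remaining inner portion.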
 
 
 
FLASH DISK: 
 
 Each Exadata cell contains 3TB of high performance flash memory distributed across 4 PCI flash 
memory cards. Each card has 4 flash devices for a total of 16 flash devices on each cell. Each flash 
device has a capacity of 186 GB. 
 Essentially, each flash device is like a physical disk in the storage hierarchy. Each flash device is 
visible to the Exadata cell software as a LUN. You can create a cell disk using the space on a flash-
based LUN. You can then create numerous grid disks on each flash-based cell disk. Unlike physical 
disk devices, the allocation order of flash space is not important from a performance perspective. 
 While it is possible to create flash-based grid disks, the primary use for flash storage is to support 
Exadata Smart Flash Cache, a high-performance caching mechanism for frequently accessed data on 
each Exadata cell. 
 By default, the initial cell configuration process creates flash-based cell disks on all the flash devices, 
and then allocates most of the available flash space to Exadata Smart Flash Cache. 
 
 
 
 To create space for flash-based grid disks, the default Exadata Smart Flash Cache must first be 
dropped. Then a new Exadata Smart Flash Cache and flash-based griddisks can be created using 
sizes chosen by the cell administrator. 
 It is possible to allocate up to one Exadata Smart Flash Cache area and zero or more flash-based grid 
disks from a flash-based cell disk. 
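
A sketch of that resizing workflow is shown below; the cache size and grid disk prefix are illustrative values to be chosen by the cell administrator:

CellCLI> DROP FLASHCACHE
CellCLI> CREATE FLASHCACHE ALL SIZE=2500G
CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX=FLASHDATA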
 
 
 
 
CellCLI> list physicaldisk attributes name,physicalSerial,diskType,physicalSize 
 20:0 E0024X HardDisk 3726.023277282715G 
 20:1 EZSENX HardDisk 3726.023277282715G 
 20:2 E07EMX HardDisk 3726.023277282715G 
 20:3 EM297X HardDisk 3726.023277282715G 
 20:4 E03EKX HardDisk 3726.023277282715G 
 20:5 E00B7X HardDisk 3726.023277282715G 
 20:6 ETYK8X HardDisk 3726.023277282715G 
 20:7 ERJWCX HardDisk 3726.023277282715G 
 20:8 ES7EHX HardDisk 3726.023277282715G 
 20:9 ETTAKX HardDisk 3726.023277282715G 
 20:10 EV1JYX HardDisk 3726.023277282715G 
 20:11 ETYBXX HardDisk 3726.023277282715G 
 FLASH_1_0 11000257739 FlashDisk 186.26451539993286G 
 FLASH_1_1 11000257712 FlashDisk 186.26451539993286G 
 FLASH_1_2 11000293811 FlashDisk 186.26451539993286G 
 FLASH_1_3 11000293764 FlashDisk 186.26451539993286G 
 
 
 
 FLASH_2_0 11000299734 FlashDisk 186.26451539993286G 
 FLASH_2_1 11000299720 FlashDisk 186.26451539993286G 
 FLASH_2_2 11000299809 FlashDisk 186.26451539993286G 
 FLASH_2_3 11000299796 FlashDisk 186.26451539993286G 
 FLASH_4_0 11000299803 FlashDisk 186.26451539993286G 
 FLASH_4_1 11000299790 FlashDisk 186.26451539993286G 
 FLASH_4_2 11000300700 FlashDisk 186.26451539993286G 
 FLASH_4_3 11000299794 FlashDisk 186.26451539993286G 
 FLASH_5_0 11000299714 FlashDisk 186.26451539993286G 
 FLASH_5_1 11000299709 FlashDisk 186.26451539993286G 
 FLASH_5_2 11000299708 FlashDisk 186.26451539993286G 
 FLASH_5_3 11000296798 FlashDisk 186.26451539993286G 
 
CellCLI> list lun attributes name,diskType,id,lunSize,isSystemLun 
 
 0_0 HardDisk 0_0 3725.2900390625G TRUE 
 0_1 HardDisk 0_1 3725.2900390625G TRUE 
 0_2 HardDisk 0_2 3725.2900390625G FALSE 
 0_3 HardDisk 0_3 3725.2900390625G FALSE 
 0_4 HardDisk 0_4 3725.2900390625G FALSE 
 0_5 HardDisk 0_5 3725.2900390625G FALSE 
 0_6 HardDisk 0_6 3725.2900390625G FALSE 
 0_7 HardDisk 0_7 3725.2900390625G FALSE 
 0_8 HardDisk 0_8 3725.2900390625G FALSE 
 0_9 HardDisk 0_9 3725.2900390625G FALSE 
 0_10 HardDisk 0_10 3725.2900390625G FALSE 
 0_11 HardDisk 0_11 3725.2900390625G FALSE 
 
 
CellCLI> list celldisk attributes name,diskType,physicalDisk,size 
 
 CD_00_exa01celadm01 HardDisk E0024X 3691.484375G 
 CD_01_exa01celadm01 HardDisk EZSENX 3691.484375G 
 CD_02_exa01celadm01 HardDisk E07EMX 3725.28125G 
 CD_03_exa01celadm01 HardDisk EM297X 3725.28125G 
 CD_04_exa01celadm01 HardDisk E03EKX 3725.28125G 
 CD_05_exa01celadm01 HardDisk E00B7X 3725.28125G 
 FD_00_exa01celadm01 FlashDisk 11000257739 186.25G 
 FD_01_exa01celadm01 FlashDisk 11000257712 186.25G 
 FD_02_exa01celadm01 FlashDisk 11000293811 186.25G 
 FD_03_exa01celadm01 FlashDisk 11000293764 186.25G 
 FD_04_exa01celadm01 FlashDisk 11000299734 186.25G 
 FD_05_exa01celadm01 FlashDisk 11000299720 186.25G 
 FD_06_exa01celadm01 FlashDisk 11000299809 186.25G 
 FD_07_exa01celadm01 FlashDisk 11000299796 186.25G 
 
 
 
 
 
 
CellCLI> list griddisk attributes name,asmDiskGroupName,cellDisk,size 
 
 DATAC1_CD_00_exa01celadm01 DATAC1 CD_00_exa01celadm01 2953G 
 DATAC1_CD_01_exa01celadm01 DATAC1 CD_01_exa01celadm01 2953G 
 DATAC1_CD_02_exa01celadm01 DATAC1 CD_02_exa01celadm01 2953G 
 DATAC1_CD_03_exa01celadm01 DATAC1 CD_03_exa01celadm01 2953G 
 DATAC1_CD_04_exa01celadm01 DATAC1 CD_04_exa01celadm01 2953G 
 DATAC1_CD_05_exa01celadm01 DATAC1 CD_05_exa01celadm01 2953G 
 DBFS_DG_CD_02_exa01celadm01 DBFS_DG CD_02_exa01celadm01 33.796875G 
 DBFS_DG_CD_03_exa01celadm01 DBFS_DG CD_03_exa01celadm01 33.796875G 
 DBFS_DG_CD_04_exa01celadm01 DBFS_DG CD_04_exa01celadm01 33.796875G 
 DBFS_DG_CD_05_exa01celadm01 DBFS_DG CD_05_exa01celadm01 33.796875G 
 RECOC1_CD_00_exa01celadm01 RECOC1 CD_00_exa01celadm01 738.4375G 
 RECOC1_CD_01_exa01celadm01 RECOC1 CD_01_exa01celadm01 738.4375G 
 RECOC1_CD_02_exa01celadm01 RECOC1 CD_02_exa01celadm01 738.4375G 
 RECOC1_CD_03_exa01celadm01 RECOC1 CD_03_exa01celadm01 738.4375G 
 RECOC1_CD_04_exa01celadm01 RECOC1 CD_04_exa01celadm01 738.4375G 
 RECOC1_CD_05_exa01celadm01 RECOC1 CD_05_exa01celadm01 738.4375G 
 
INTERLEAVED GRIDDISKS: 
By default, space for grid disks is allocated from the outer tracks to the inner tracks of a physical disk. So 
the first grid disk created on each cell disk uses the outermost portion of the disk, where each track 
contains more data, resulting in higher transfer rates and better performance. 
However, space for grid disks can be allocated in an interleaved manner. Grid disks that use this type of 
space allocation are referred to as interleaved grid disks. This method effectively equalizes the 
performance of multiple grid disks residing on the same physical disk. 
 
 
 
 
Interleaved grid disks work in conjunction with ASM intelligent data placement (IDP) to ensure that 
primary ASM extents are placed in the faster upper portion of each grid disk, while secondary (mirror) 
extents are placed on the slower lower portion of each grid disk. IDP is automatically enabled when the 
disk group REDUNDANCY setting is compatible with the INTERLEAVING setting for the underlying cell 
disk. 
 
To automatically leverage IDP on a NORMAL redundancy disk group, the underlying cell disks must have 
the attribute setting INTERLEAVING='normal_redundancy'. In this case, all the primary extents are 
placed in the outer half (upper portion) of the disk, and all the mirror extents are placed in the inner half 
(lower portion). 
 
To automatically leverage IDP on a HIGH redundancy disk group, the underlying cell disks must have the 
attribute setting INTERLEAVING='high_redundancy'. In this case, all the primary extents are placed in the 
outer third (upper portion) of the disk, and all the mirror extents are placed in the inner two-thirds (lower 
portion). 
 
ASM will not allow incompatibility between the disk group REDUNDANCY setting and the INTERLEAVING 
setting for the underlying cell disks. For example, a NORMAL redundancy disk group cannot be created 
over cell disks with INTERLEAVING='high_redundancy'. ASM will not permit the creation of such a disk 
group, nor will it allow disks to be added to an already existing disk group if that would result in an 
incompatibility. 
 
 
 
 
CREATING INTERLEAVED GRIDDISKS:
 
CellCLI> CREATE CELLDISK ALL HARDDISK INTERLEAVING='normal_redundancy' 
CellCLI> CREATE GRIDDISK ALL PREFIX=DATAC1, SIZE=2953G 
CellCLI> CREATE GRIDDISK ALL PREFIX=RECOC1, SIZE=738G 
CellCLI> CREATE GRIDDISK ALL PREFIX=DBFS 
 
 
 
CellCLI> list griddisk attributes asmDiskName,diskType,offset,size 
 
 DATAC1_CD_00_EXA01CELADM01 HardDisk 32M 2953G 
 DATAC1_CD_01_EXA01CELADM01 HardDisk 32M 2953G 
 DATAC1_CD_02_EXA01CELADM01 HardDisk 32M 2953G 
 DATAC1_CD_03_EXA01CELADM01 HardDisk 32M 2953G 
 DATAC1_CD_04_EXA01CELADM01 HardDisk 32M 2953G 
 DATAC1_CD_05_EXA01CELADM01 HardDisk 32M 2953G 
 RECOC1_CD_00_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 RECOC1_CD_01_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 RECOC1_CD_02_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 RECOC1_CD_03_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 RECOC1_CD_04_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 RECOC1_CD_05_EXA01CELADM01 HardDisk 2953.046875G 738.4375G 
 DBFS_DG_CD_02_EXA01CELADM01 HardDisk 3691.484375G 33.796875G 
 DBFS_DG_CD_03_EXA01CELADM01 HardDisk 3691.484375G 33.796875G 
 DBFS_DG_CD_04_EXA01CELADM01 HardDisk 3691.484375G 33.796875G 
 DBFS_DG_CD_05_EXA01CELADM01 HardDisk 3691.484375G 33.796875G 
 
DISK_REPAIR_TIME: 
 
If a grid disk remains offline longer than the time specified by the disk_repair_time attribute, then 
Oracle ASM force drops that grid disk and starts a rebalance to restore data redundancy. 
 
The Oracle ASM disk repair timer represents the amount of time a disk can remain offline before it is 
dropped by Oracle ASM. While the disk is offline, Oracle ASM tracks the changed extents so the disk can 
be resynchronized when it comes back online. The default disk repair time is 3.6 hours. If the default is 
inadequate, then the attribute value can be changed to the maximum amount of time it might take to 
detect and repair a temporary disk failure. The following command is an example of changing the disk 
repair timer value to 8.5 hours for the DATA disk group: 
 
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8.5h' 
 
The disk_repair_time attribute does not change the repair timer for disks currently offline. The repair 
timer for those offline disks is either the default repair timer or the repair timer specified on the 
command line when the disks were manually set to offline. To change the repair timer for currently 
offline disks, use the OFFLINE command and specify a repair timer value. The following command is an 
example of changing the disk repair timer value for disks that are offline: 
 
ALTER DISKGROUP data OFFLINE DISK data_CD_06_cell11 DROP AFTER 20h; 
 
Note: When the disk repair time value is increased, the window of exposure to a double failure also increases.
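
To check the current disk_repair_time setting for each disk group, you can query the standard ASM dictionary views from the Oracle ASM instance, for example:

SELECT dg.name, a.value
FROM v$asm_diskgroup dg, v$asm_attribute a
WHERE dg.group_number = a.group_number
AND a.name = 'disk_repair_time';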
 
 
 
 
 
CELLCLI: 
 
The storage cells in Exadata Database Machine are managed via two tools: the Cell Command Line Interface (CellCLI) and the Distributed Command Line Interface (DCLI). The CellCLI utility is invoked from the Linux command line on the storage cells, not on the compute nodes. CellCLI displays the CellCLI> prompt, at which you enter commands.
 
The CellCLI commands have the following general structure: 
<Verb> <Object> <Modifier> <Filter> 
 
A verb is the action you want to perform, e.g. display something.
An object is what you want the action performed on, e.g. a disk.
A modifier (optional) shows how you want the operation to be modified, e.g. all disks, or a specific disk.
A filter (optional) is similar to the WHERE predicate of a SQL statement and is specified with a WHERE clause.
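
Putting the four parts together, a complete command might look like this (LIST is the verb, GRIDDISK the object, ATTRIBUTES ... the modifier, and WHERE ... the filter):

CellCLI> LIST GRIDDISK ATTRIBUTES name, size WHERE diskType = 'HardDisk'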
 
There are only a few primary verbs you will use mostly and need to remember. They are: 
 
LIST – to show something, e.g. disks, statistics, Resource Manager Plans, etc. 
CREATE – to create something, e.g. a cell disk, a threshold 
ALTER – to change something that has been established, e.g. change the size of a disk 
DROP – to delete something, e.g. dropping a disk 
DESCRIBE – to display the various attributes of an object 
 
HOW TO INVOKE CELLCLI: 
 
[root@exa01celadm01 ~]# cellcli -e <command> 
 
OR 
 
[root@exa01celadm01 ~]# cellcli 
CellCLI: Release 12.1.1.1.1 - Production on Mon Oct 05 09:43:54 EDT 2015 
 
Copyright (c) 2007, 2013, Oracle. All rights reserved. 
Cell Efficiency Ratio: 5,134 
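
For example, using the -e form to run a single command and return straight to the shell (any valid CellCLI command can be substituted):

[root@exa01celadm01 ~]# cellcli -e list cell detail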
 
 
CellCLI> help list 
 
 Enter HELP LIST <object_type> for specific help syntax. 
 <object_type>: {ACTIVEREQUEST | ALERTHISTORY | ALERTDEFINITION | CELL 
 | CELLDISK | DATABASE | FLASHCACHE | FLASHLOG | FLASHCACHECONTENT 
 | GRIDDISK | IBPORT | IORMPLAN | KEY | LUN 
 | METRICCURRENT | METRICDEFINITION | METRICHISTORY 
 | PHYSICALDISK | QUARANTINE | THRESHOLD} 
 
 
 
 
CellCLI> help list CELLDISK 
 
 Usage: LIST CELLDISK [<name> | <filters>] [<attribute_list>] [DETAIL] 
 Purpose: Displays specified attributes for cell disks. 
 Arguments: 
 <name>: The name of the cell disk to be displayed. 
 <filters>: an expression which determines which cell disks should 
 be displayed. 
 <attribute_list>: The attributes that are to be displayed. 
 ATTRIBUTES {ALL | attr1 [, attr2]... } 
 Options: 
 [DETAIL]: Formats the display as an attribute on each line, with 
 an attribute descriptor preceding each value. 
 Examples: 
 LIST CELLDISK cd1 DETAIL 
 LIST CELLDISK where freespace > 100M 
 
CellCLI> list CELLDISK 
 CD_00_exa01celadm01 normal 
 CD_01_exa01celadm01 normal 
 CD_02_exa01celadm01 normal 
 CD_03_exa01celadm01 normal 
 CD_04_exa01celadm01 normal 
 CD_05_exa01celadm01 normal 
 FD_00_exa01celadm01 normal 
 FD_01_exa01celadm01 normal 
 FD_02_exa01celadm01 normal 
 FD_03_exa01celadm01 normal 
 FD_04_exa01celadm01 normal 
 FD_05_exa01celadm01 normal 
 FD_06_exa01celadm01 normal 
 FD_07_exa01celadm01 normal 
 
CellCLI> list CELLDISK CD_00_exa01celadm01 detail 
 name: CD_00_exa01celadm01 
 comment: 
 creationTime: 2014-07-08T11:13:35-04:00 
 deviceName: /dev/sda 
 devicePartition: /dev/sda3 
 diskType: HardDisk 
 errorCount: 0 
 freeSpace: 0 
 id: xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxxx 
 interleaving: none 
 lun: 0_0 
 physicalDisk: E0073X 
 
 
 
 raidLevel: 0 
 size: 3691.484375G 
 status: normal 
 
CellCLI> list CELLDISK attributes name,size,status 
 CD_00_exa01celadm01 3691.484375G normal 
 CD_01_exa01celadm01 3691.484375G normal 
 CD_02_exa01celadm01 3725.28125G normal 
 CD_03_exa01celadm01 3725.28125G normal 
 CD_04_exa01celadm01 3725.28125G normal 
 CD_05_exa01celadm01 3725.28125G normal 
 FD_00_exa01celadm01 186.25G normal 
 FD_01_exa01celadm01 186.25G normal 
 FD_02_exa01celadm01 186.25G normal
