Create a Virtual Machine for Exadata CELL or Storage Server
In this topic, I will demonstrate how to configure a Cell or Storage Server for Exadata on a VM. Very few companies use the Exadata engineered system, so it is difficult for a DBA to get hands-on experience with Exadata. An Exadata simulation is therefore a good option for learning and for starting a career as a DMA (Database Machine Administrator).
Task List:
- Required Software Download
- Hardware Requirements
- Create a Virtual Machine
- Install Oracle Linux
- Prerequisite for Exadata Storage Server
- Install the Exadata Storage Server Software
- Clone Storage Server for 2nd VM of Exadata Storage Server
- Create Cell disks and Flash Cache storage for Exadata Storage Server on Node 1
- Create Cell disks and Flash Cache storage for Exadata Storage Server on Node 2
- Configure Compute Node or Database Node
- Install Oracle Grid and Database
Oracle Doc: Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
Version Compatibility and Requirements:
Software versions used in this guide:
- Oracle Exadata Storage Server Software: 11.2.3.2.1
- Operating System: Oracle Linux 5.9
- Oracle GI and Database: 11.2.0.4.0
- JDK: 1.5
Step 1: Required Software Download:
1.1. Download Oracle VM VirtualBox and install it: VM Box
1.2. Download the Exadata Storage Server Software from Oracle: Download
1.3. Download Oracle Linux: Oracle Linux
Step 2: Hardware Requirements:
- Virtual Machine: at least 2 Storage Server VMs and 2 Compute Node VMs
- Memory: at least 4 GB per Storage Server and 2-3 GB per DB Server, so around 12-14 GB for the 4 VMs
- Storage: Storage Servers: 40 GB x 2 = 80 GB; Compute Nodes: 40 GB x 2 = 80 GB; total around 160 GB
Environment:
- Storage Server:
- CELL Node 1:
- Host Name: exadatacell01
- Public IP: 192.168.56.60
- Private IP: 192.168.2.60
- CELL Node 2:
- Host Name: exadatacell02
- Public IP: 192.168.56.70
- Private IP: 192.168.2.70
- Compute Node: (two nodes are mandatory for RAC, but you may test a standalone DB, in which case one compute node will be fine)
- Compute Node 1:
- Host Name: exadatadb01
- Public IP: 192.168.56.80
- Private IP: 192.168.2.80
- Compute Node 2:
- Host Name: exadatadb02
- Public IP: 192.168.56.90
- Private IP: 192.168.2.90
- Scan IP:
- 192.168.56.30
- 192.168.56.40
- 192.168.56.50
Step 3: Create a Virtual Machine: exadatacell01
How to Create VM: Create a Virtual Machine
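The linked post covers the GUI steps. For reference, the same VM can also be created from the host command line; a minimal sketch, assuming a host-only network for the public interface (192.168.56.x), an internal network for the private interface (192.168.2.x), and names/paths that match this guide (the adapter name and OS-disk size are assumptions):

cd "c:\Program Files\Oracle\VirtualBox"
REM Create and register the VM
VBoxManage createvm --name exadatacell01 --ostype Oracle_64 --register --basefolder "C:\VM"
REM 4 GB memory per the hardware requirements above
VBoxManage modifyvm exadatacell01 --memory 4096 --cpus 1
REM NIC1: host-only (public 192.168.56.x); NIC2: internal network (private 192.168.2.x)
VBoxManage modifyvm exadatacell01 --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage modifyvm exadatacell01 --nic2 intnet --intnet2 "exadata-priv"
REM SATA controller with room for the OS disk plus the 14 cell/flash disks added later
VBoxManage storagectl exadatacell01 --name "SATA" --add sata --portcount 16
VBoxManage createhd --filename "C:\VM\exadatacell01\os.vdi" --size 40960 --format VDI
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "C:\VM\exadatacell01\os.vdi"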
Step 4: Install Oracle Linux on VM
Step 5: Prerequisites for Exadata Storage Server:
5.1. Add the IPs and hostnames to the /etc/hosts file.
[root@exadatacell01 ~]# cat /etc/hosts
127.0.0.1      localhost.localdomain localhost
#::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.60  exadatacell01.localdomain     exadatacell01
192.168.56.70  exadatacell02.localdomain     exadatacell02
192.168.2.60   exadatacell01-ib.localdomain  exadatacell01-ib
192.168.2.70   exadatacell02-ib.localdomain  exadatacell02-ib
[root@exadatacell01 ~]# ping exadatacell01
PING exadatacell01.localdomain (192.168.56.60) 56(84) bytes of data.
64 bytes from exadatacell01.localdomain (192.168.56.60): icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from exadatacell01.localdomain (192.168.56.60): icmp_seq=2 ttl=64 time=0.035 ms
5.2. Set Kernel Parameters:
[root@exadatacell01 ~]# vi /etc/sysctl.conf
##### Exadata ################
fs.file-max = 655360
fs.aio-max-nr = 50000000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 4194304
net.core.wmem_max = 4194304
##### Exadata ################
[root@exadatacell01 media]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
fs.file-max = 65536
fs.aio-max-nr = 50000000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 4194304
net.core.wmem_max = 4194304
[root@exadatacell01 ~]# vi /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
#End of file
[root@exadatacell01 media]# cat /etc/grub.conf | grep default
default=0
# Change the default value to 1 so that the Red Hat-compatible kernel (2.6.18),
# which the storage server software runs on, is booted instead of the default entry.
[root@exadatacell01 media]# vi /etc/grub.conf
[root@exadatacell01 media]# cat /etc/grub.conf | grep default
default=1
# Add the line below to /etc/bashrc:
[root@exadatacell01 media]# vi /etc/bashrc
export DISPLAY=:0
[root@exadatacell01 media]# cat /etc/bashrc | grep DISPLAY
export DISPLAY=:0
5.3. Disable Firewall and SELinux
[root@exadatacell01 ~]# chkconfig ip6tables off
[root@exadatacell01 ~]# service ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter          [  OK  ]
ip6tables: Flushing firewall rules:                         [  OK  ]
ip6tables: Unloading modules:                               [  OK  ]
[root@exadatacell01 ~]# chkconfig iptables off
[root@exadatacell01 ~]# service iptables stop
[root@exadatacell01 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
SELINUXTYPE=targeted
Change the SELINUX value to "SELINUX=disabled" (permissive would also work for this simulation):
[root@exadatacell01 ~]# vi /etc/selinux/config
SELINUX=disabled
[root@exadatacell01 ~]# getenforce
Disabled
Reboot the System.
5.4 Create Directories:
[root@exadatacell01 ~]# mkdir /var/log/oracle
[root@exadatacell01 ~]# chmod 775 /var/log/oracle
[root@exadatacell01 ~]# cd /
[root@exadatacell01 /]# mkdir storage_server_software
5.5. Load RDS Kernel Modules:
Load the RDS modules now, then add the install line to /etc/modprobe.d/rds.conf so they load automatically at boot:
[root@exadatacell01 /]# modprobe rds
[root@exadatacell01 /]# modprobe rds_tcp
[root@exadatacell01 /]# modprobe rds_rdma
[root@exadatacell01 /]# cat /etc/modprobe.d/rds.conf
install rds /sbin/modprobe --ignore-install rds && /sbin/modprobe rds_tcp && /sbin/modprobe rds_rdma
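A quick sanity check that the modules are actually loaded (worth repeating after a reboot to confirm the rds.conf entry works); lsmod should list rds, rds_tcp and rds_rdma:

[root@exadatacell01 /]# lsmod | grep rds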
5.6. Remove the conflicting installation package if it already exists
[root@exadatacell01 /]# rpm -qa | egrep 'rds|rdma'
rdma-6.8_4.1-1.el6.noarch
words-3.0-17.el6.noarch
[root@exadatacell01 /]# rpm -e rdma
[root@exadatacell01 /]# rpm -qa | egrep 'rds|rdma'
words-3.0-17.el6.noarch
Step 6. Install the Exadata Storage Server Software
6.1. Copy the software from your local disk to the VM and unzip it. You may use WinSCP to copy it to the VM.
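As an alternative to WinSCP, scp from the host works too; a sketch, assuming the zip was downloaded to C:\Downloads on the host:

C:\Users\samad>scp "C:\Downloads\V36290-01.zip" root@192.168.56.60:/storage_server_software/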
[root@exadatacell01 storage_server_software]# pwd
/storage_server_software
[root@exadatacell01 storage_server_software]# ls
V36290-01.zip
[root@exadatacell01 storage_server_software]# unzip V36290-01.zip
Archive:  V36290-01.zip
  inflating: README.txt
  inflating: cellImageMaker_11.2.3.2.1_LINUX.X64_130109-1.x86_64.tar
[root@exadatacell01 storage_server_software]# tar -xvf cellImageMaker_11.2.3.2.1_LINUX.X64_130109-1.x86_64.tar
dl180/
dl180/boot/
dl180/boot/boot.msg
dl180/boot/boot.cat
dl180/boot/memtest
dl180/boot/vmlinuz
.....
dl180/grub/ufs2_stage1_5
dl180/grub/stage2
dl180/grub/iso9660_stage1_5
dl180/grub/fat_stage1_5
dl180/grub/ffs_stage1_5
6.2. Unzip cell.bin
[root@exadatacell01 cellbits]# pwd
/storage_server_software/dl180/boot/cellbits
[root@exadatacell01 cellbits]# ls -lrt
total 1448304
-rwxrwxr-x 1 root root 245231205 Jan  9  2013 cell.bin
-rw-rw-r-- 1 root root  55704927 Jan  9  2013 doclib.zip
-rw-rw-r-- 1 root root 141444416 Jan  9  2013 cellfw.tbz
-rw-rw-r-- 1 root root 199485158 Jan  9  2013 exaos.tbz
-rw-rw-r-- 1 root root  12705374 Jan  9  2013 cellboot.tbz
-rw-rw-r-- 1 root root  16165382 Jan  9  2013 ofed.tbz
-rw-rw-r-- 1 root root  12485584 Jan  9  2013 sunutils.tbz
-rw-rw-r-- 1 root root  18186084 Jan  9  2013 hputils.tbz
-rw-rw-r-- 1 root root 208612489 Jan  9  2013 commonos.tbz
-rw-rw-r-- 1 root root  53387742 Jan  9  2013 kernel.tbz
-rw-rw-r-- 1 root root 375683818 Jan  9  2013 debugos.tbz
-rw-rw-r-- 1 root root 142434203 Jan  9  2013 cellrpms.tbz
-rw-rw-r-- 1 root root       729 Jan  9  2013 c7rpms.tbz
[root@exadatacell01 cellbits]# unzip cell.bin
Archive:  cell.bin
warning [cell.bin]:  6408 extra bytes at beginning or within zipfile
  (attempting to process anyway)
  inflating: cell-11.2.3.2.1_LINUX.X64_130109-1.x86_64.rpm
  inflating: jdk-1_5_0_15-linux-amd64.rpm
6.3. Install JDK
[root@exadatacell01 cellbits]# pwd
/storage_server_software/dl180/boot/cellbits
[root@exadatacell01 cellbits]# rpm -ivh jdk-1_5_0_15-linux-amd64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
[root@exadatacell01 jdk1.5.0_15]# export JAVA_HOME=/usr/java/jdk1.5.0_15
[root@exadatacell01 jdk1.5.0_15]# export PATH=$JAVA_HOME/bin:$PATH
[root@exadatacell01 jdk1.5.0_15]# java -version
java version "1.5.0_15"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
6.4. Install required packages for CELL rpm:
If the VM has an Internet connection, you will be able to run the following commands and the packages will be downloaded automatically. Otherwise, you need to configure a yum repository manually.
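Without Internet access, a local yum repository can be built from the Oracle Linux installation DVD; a minimal sketch (the mount point and ISO file name are assumptions):

[root@exadatacell01 ~]# mkdir -p /media/ol5
[root@exadatacell01 ~]# mount -o loop /storage_server_software/OracleLinux-R5-U9-x86_64-dvd.iso /media/ol5
[root@exadatacell01 ~]# cat > /etc/yum.repos.d/ol5-local.repo <<EOF
[ol5-local]
name=Oracle Linux 5 local DVD
baseurl=file:///media/ol5/Server
enabled=1
gpgcheck=0
EOF
[root@exadatacell01 ~]# yum clean all
[root@exadatacell01 ~]# yum install net-snmp

With Internet access, the packages come straight from the public yum server: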
[root@exadatacell01 ~]# yum install net-snmp
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
You can use up2date --register to register.
ULN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
--> Processing Dependency: net-snmp = 1:5.3.2.2-20.0.2.el5 for package: net-snmp-perl
--> Processing Dependency: net-snmp = 1:5.3.2.2-20.0.2.el5 for package: net-snmp-utils
---> Package net-snmp.x86_64 1:5.3.2.2-25.0.2.el5_11 set to be updated
--> Processing Dependency: net-snmp-libs = 1:5.3.2.2-25.0.2.el5_11 for package: net-snmp
--> Running transaction check
---> Package net-snmp-libs.x86_64 1:5.3.2.2-25.0.2.el5_11 set to be updated
---> Package net-snmp-perl.x86_64 1:5.3.2.2-25.0.2.el5_11 set to be updated
---> Package net-snmp-utils.x86_64 1:5.3.2.2-25.0.2.el5_11 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================
 Package           Arch       Version                     Repository       Size
===================================================================================
Updating:
 net-snmp          x86_64     1:5.3.2.2-25.0.2.el5_11     el5_latest      708 k
Updating for dependencies:
 net-snmp-libs     x86_64     1:5.3.2.2-25.0.2.el5_11     el5_latest      1.3 M
 net-snmp-perl     x86_64     1:5.3.2.2-25.0.2.el5_11     el5_latest      203 k
 net-snmp-utils    x86_64     1:5.3.2.2-25.0.2.el5_11     el5_latest      194 k

Transaction Summary
===================================================================================
Install       0 Package(s)
Upgrade       4 Package(s)

Total download size: 2.4 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): net-snmp-utils-5.3.2.2-25.0.2.el5_11.x86_64.rpm     | 194 kB     00:00
(2/4): net-snmp-perl-5.3.2.2-25.0.2.el5_11.x86_64.rpm      | 203 kB     00:00
(3/4): net-snmp-5.3.2.2-25.0.2.el5_11.x86_64.rpm           | 708 kB     00:00
(4/4): net-snmp-libs-5.3.2.2-25.0.2.el5_11.x86_64.rpm      | 1.3 MB     00:00
-----------------------------------------------------------------------------------
Total                                           335 kB/s   | 2.4 MB     00:07
warning: rpmts_HdrFromFdno: Header V3 DSA signature: NOKEY, key ID 1e5e0159
el5_latest/gpgkey                                          | 1.4 kB     00:00
Importing GPG key 0x1E5E0159 "Oracle OSS group (Open Source Software group) <[email protected]>" from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : net-snmp-libs      1/8
  Updating   : net-snmp           2/8
  Updating   : net-snmp-perl      3/8
  Updating   : net-snmp-utils     4/8
  Cleanup    : net-snmp           5/8
  Cleanup    : net-snmp-utils     6/8
  Cleanup    : net-snmp-perl      7/8
  Cleanup    : net-snmp-libs      8/8

Updated:
  net-snmp.x86_64 1:5.3.2.2-25.0.2.el5_11

Dependency Updated:
  net-snmp-libs.x86_64 1:5.3.2.2-25.0.2.el5_11
  net-snmp-perl.x86_64 1:5.3.2.2-25.0.2.el5_11
  net-snmp-utils.x86_64 1:5.3.2.2-25.0.2.el5_11

Complete!
Note::: Node 1 is now ready for the Exadata Storage Server software installation. As we are planning to configure two storage nodes, it is better to clone Node 1 to create Node 2 now, so the above steps do not have to be repeated on Node 2.
NOTE::: Clone Storage Server for 2nd VM of Exadata Storage Server (exadatacell02)
You may check the link below to clone the VM:
Clone Cell Node 1 to make Node 2: How to Clone VM
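For reference, the clone can also be made from the host command line; a sketch (a full clone, so the disks are independent; paths are assumptions):

cd "c:\Program Files\Oracle\VirtualBox"
VBoxManage clonevm exadatacell01 --name exadatacell02 --basefolder "C:\VM" --register

After the first boot of the clone, change the hostname and the IPs to the exadatacell02 values (192.168.56.70 / 192.168.2.70).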
6.5. Set environment variable for correct version of Java
[root@exadatacell01 jdk1.5.0_15]# export JAVA_HOME=/usr/java/jdk1.5.0_15
[root@exadatacell01 jdk1.5.0_15]# export PATH=$JAVA_HOME/bin:$PATH
[root@exadatacell01 jdk1.5.0_15]# java -version
java version "1.5.0_15"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
6.6. Install Exadata Storage Server RPM
[root@exadatacell02 cellbits]# pwd
/storage_server_software/dl180/boot/cellbits
[root@exadatacell01 cellbits]# rpm -ivh cell-11.2.3.2.1_LINUX.X64_130109-1.x86_64.rpm
Preparing...                ########################################### [100%]
Pre Installation steps in progress ...
   1:cell                   ########################################### [100%]
Post Installation steps in progress ...
Set cellusers group for /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log directory
Set 775 permissions for /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log directory
/ /
Installation SUCCESSFUL.
Starting RS and MS... as user celladmin
Done. Please Login as user celladmin and create cell to startup CELLSRV to complete cell configuration.
WARNING: Using the current shell as root to restart cell services.
         Restart the cell services using a new shell.
6.7. Validation
[root@exadatacell01 cellbits]# su - celladmin
[celladmin@exadatacell01 ~]$ ls
[celladmin@exadatacell01 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Tue Jul 07 00:57:41 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> alter cell restart services all

Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was not successful.
CELL-01547: CELLSRV startup failed due to unknown reasons.
Starting MS services...
The STARTUP of MS services was successful.
CELL-01547: CELLSRV startup failed due to unknown reasons: this error is raised because the ipaddress1 parameter is missing from the cellinit.ora file.
[celladmin@exadatacell01 config]$ pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config
[celladmin@exadatacell01 config]$ ls -lrt cellinit.ora
-rw-r--r-- 1 celladmin root 135 Jul  7 00:56 cellinit.ora
[celladmin@exadatacell01 config]$ cat cellinit.ora
#CELL Initialization Parameters
version=0.0
DEPLOYED=TRUE
HTTP_PORT=8888
RMI_PORT=23791
SSL_PORT=23943
JMS_PORT=9127
BMC_SNMP_PORT=162
# Add the line below to the cellinit.ora file:
# ipaddress1=192.168.2.60/24
[celladmin@exadatacell01 config]$ vi cellinit.ora
#CELL Initialization Parameters
version=0.0
DEPLOYED=TRUE
HTTP_PORT=8888
RMI_PORT=23791
SSL_PORT=23943
JMS_PORT=9127
BMC_SNMP_PORT=162
ipaddress1=192.168.2.60/24
6.7.1. RS, CELLSRV and MS Services have been started successfully
[celladmin@exadatacell01 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Tue Jul 07 01:32:09 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> alter cell restart services all

Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
Step 7: Create Cell disks and Flash Cache storage for Exadata Storage Server on Node 1
- Create 10 disks of 1024 MB each; these are for the cell disks
- Create 4 disks of 600 MB each; these are for the flash disks
7.1. Add 10 disks for the cell disks
C:\Users\samad>cd "c:\Program Files\Oracle\VirtualBox"
VBoxManage createhd --filename "C:\VM\exadatacell01\hd1.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd2.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd3.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd4.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd5.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd6.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd7.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd8.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd9.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd10.vdi" --size 1024 --format VDI --variant Fixed
7.2. Add 4 disks for the flash disks
C:\Users\samad>cd "c:\Program Files\Oracle\VirtualBox"
VBoxManage createhd --filename "C:\VM\exadatacell01\hd11.vdi" --size 600 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd12.vdi" --size 600 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd13.vdi" --size 600 --format VDI --variant Fixed
VBoxManage createhd --filename "C:\VM\exadatacell01\hd14.vdi" --size 600 --format VDI --variant Fixed
7.3. Run these commands from the Windows command prompt (cmd)
C:\Users\samad>cd "c:\Program Files\Oracle\VirtualBox"
c:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\VM\exadatacell01\hd1.vdi" --size 1024 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 5eac0693-c88f-4b7a-81ef-e5f128b019bb
c:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\VM\exadatacell01\hd2.vdi" --size 1024 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 2d631e61-b514-4806-8dab-ed2ca2811af8
....
VM details:
- VM Name: exadatacell01
- Path of VM C:\VM\exadatacell01\
7.4. Attach the created disks to the VM
7.4.1. For Cell Disks
Note::: The VM name and path in the script below may need to be changed for your environment.
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd1.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd2.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd3.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd4.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 5 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd5.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 6 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd6.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 7 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd7.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 8 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd8.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 9 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd9.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 10 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd10.vdi" --mtype normal
7.4.2. For Flash Disks
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 11 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd11.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 12 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd12.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 13 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd13.vdi" --mtype normal
VBoxManage storageattach exadatacell01 --storagectl "SATA" --port 14 --device 0 --type hdd --medium "C:\VM\exadatacell01\hd14.vdi" --mtype normal
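If any of the attach commands fail with a port-out-of-range error, the SATA controller's port count can be raised first; a sketch, assuming the controller is named "SATA" as in the commands above:

cd "c:\Program Files\Oracle\VirtualBox"
VBoxManage storagectl exadatacell01 --name "SATA" --portcount 16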
7.5. Create directories for the Storage Server physical disks:
[root@exadatacell01 cellbits]# cd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109
[root@exadatacell01 cell11.2.3.2.1_LINUX.X64_130109]# mkdir -p disks/raw
[root@exadatacell01 cell11.2.3.2.1_LINUX.X64_130109]# cd disks/raw
[root@exadatacell01 raw]# pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
7.6. Create Symbolic Disks/Links:
[root@exadatacell01 raw]# pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
[root@exadatacell01 raw]# fdisk -l 2>/dev/null | grep 'MB'
Disk /dev/sdb: 1073 MB, 1073741824 bytes
Disk /dev/sdc: 1073 MB, 1073741824 bytes
Disk /dev/sdd: 1073 MB, 1073741824 bytes
Disk /dev/sde: 1073 MB, 1073741824 bytes
Disk /dev/sdf: 1073 MB, 1073741824 bytes
Disk /dev/sdg: 1073 MB, 1073741824 bytes
Disk /dev/sdh: 1073 MB, 1073741824 bytes
Disk /dev/sdi: 1073 MB, 1073741824 bytes
Disk /dev/sdj: 1073 MB, 1073741824 bytes
Disk /dev/sdk: 1073 MB, 1073741824 bytes
Disk /dev/sdl: 629 MB, 629145600 bytes
Disk /dev/sdm: 629 MB, 629145600 bytes
Disk /dev/sdn: 629 MB, 629145600 bytes
Disk /dev/sdo: 629 MB, 629145600 bytes
[root@exadatacell01 raw]# pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
ln -s /dev/sdb exadatacell01_DISK00
ln -s /dev/sdc exadatacell01_DISK01
ln -s /dev/sdd exadatacell01_DISK02
ln -s /dev/sde exadatacell01_DISK03
ln -s /dev/sdf exadatacell01_DISK04
ln -s /dev/sdg exadatacell01_DISK05
ln -s /dev/sdh exadatacell01_DISK06
ln -s /dev/sdi exadatacell01_DISK07
ln -s /dev/sdj exadatacell01_DISK08
ln -s /dev/sdk exadatacell01_DISK09
ln -s /dev/sdl exadatacell01_FLASH00
ln -s /dev/sdm exadatacell01_FLASH01
ln -s /dev/sdn exadatacell01_FLASH02
ln -s /dev/sdo exadatacell01_FLASH03
[root@exadatacell01 raw]# ls -lrt
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK00 -> /dev/sdb
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK01 -> /dev/sdc
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK02 -> /dev/sdd
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK03 -> /dev/sde
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK04 -> /dev/sdf
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK05 -> /dev/sdg
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK06 -> /dev/sdh
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK07 -> /dev/sdi
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK08 -> /dev/sdj
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_DISK09 -> /dev/sdk
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_FLASH00 -> /dev/sdl
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_FLASH01 -> /dev/sdm
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_FLASH02 -> /dev/sdn
lrwxrwxrwx. 1 root root 8 Jul  5 18:32 exadatacell01_FLASH03 -> /dev/sdo
7.6.1 Create Script for symbolic links
[root@exadatacell01 raw]# cat symbolic_link.sh
ln -s /dev/sdb /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK00
ln -s /dev/sdc /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK01
ln -s /dev/sdd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK02
ln -s /dev/sde /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK03
ln -s /dev/sdf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK04
ln -s /dev/sdg /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK05
ln -s /dev/sdh /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK06
ln -s /dev/sdi /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK07
ln -s /dev/sdj /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK08
ln -s /dev/sdk /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK09
ln -s /dev/sdl /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_FLASH00
ln -s /dev/sdm /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_FLASH01
ln -s /dev/sdn /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_FLASH02
ln -s /dev/sdo /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_FLASH03
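For reference, the same links can be generated in a loop instead of being listed one by one; a sketch assuming the devices enumerate as sdb through sdo in the order shown above:

#!/bin/bash
# Recreate the cell disk and flash disk symlinks for exadatacell01 (run as root).
RAW=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
DEVS=(b c d e f g h i j k)   # ten 1 GB cell disk devices
for i in 0 1 2 3 4 5 6 7 8 9; do
  ln -sf /dev/sd${DEVS[$i]} $RAW/exadatacell01_DISK0$i
done
FDEVS=(l m n o)              # four 600 MB flash disk devices
for i in 0 1 2 3; do
  ln -sf /dev/sd${FDEVS[$i]} $RAW/exadatacell01_FLASH0$i
done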
7.7. Ready to Create Storage cell
7.7.1. Connect as celladmin and create the storage cell
[celladmin@exadatacell01 ~]$ cellcli -e create cell interconnect1=eth1
CELL-01518: Stop CELLSRV. Create Cell cannot continue with CELLSRV running.
Note::: The above error occurs because CELLSRV is already running; the CELLSRV service must be stopped first.
[celladmin@exadatacell01 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Tue Jul 07 01:41:34 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> alter cell shutdown services CELLSRV

Stopping CELLSRV services...
The SHUTDOWN of CELLSRV services was successful.
[celladmin@exadatacell01 ~]$ cellcli -e create cell interconnect1=eth1
Cell exadatacell01 successfully created
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Flash cell disks, FlashCache, and FlashLog will be created...
CellDisk FD_00_exadatacell01 successfully created
CellDisk FD_01_exadatacell01 successfully created
CellDisk FD_02_exadatacell01 successfully created
CellDisk FD_03_exadatacell01 successfully created
Flash log exadatacell01_FLASHLOG successfully created
Flash cache exadatacell01_FLASHCACHE successfully created
7.7.2. Create Cell Disk
To get the list of available commands in CellCLI:
[celladmin@exadatacell01 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Tue Jul 14 14:12:49 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> help

 HELP [topic]
   Available Topics:
        ALTER
        ALTER ALERTHISTORY
        ALTER CELL
        ALTER CELLDISK
        ALTER FLASHCACHE
        ALTER GRIDDISK
        ALTER IBPORT
        ALTER IORMPLAN
        ALTER LUN
        ALTER PHYSICALDISK
        ALTER QUARANTINE
        ALTER THRESHOLD
        ASSIGN KEY
        CALIBRATE
        CREATE
        CREATE CELL
        CREATE CELLDISK
        CREATE FLASHCACHE
        CREATE FLASHLOG
        CREATE GRIDDISK
        CREATE KEY
        CREATE QUARANTINE
        CREATE THRESHOLD
        DESCRIBE
        DROP
        DROP ALERTHISTORY
        DROP CELL
        DROP CELLDISK
        DROP FLASHCACHE
        DROP FLASHLOG
        DROP GRIDDISK
        DROP QUARANTINE
        DROP THRESHOLD
        EXPORT CELLDISK
        IMPORT CELLDISK
        LIST
        LIST ACTIVEREQUEST
        LIST ALERTDEFINITION
        LIST ALERTHISTORY
        LIST CELL
        LIST CELLDISK
        LIST FLASHCACHE
        LIST FLASHCACHECONTENT
        LIST FLASHLOG
        LIST GRIDDISK
        LIST IBPORT
        LIST IORMPLAN
        LIST KEY
        LIST LUN
        LIST METRICCURRENT
        LIST METRICDEFINITION
        LIST METRICHISTORY
        LIST PHYSICALDISK
        LIST QUARANTINE
        LIST THRESHOLD
        SET
        SPOOL
        START
CellCLI> create celldisk all
CellDisk CD_DISK00_exadatacell01 successfully created
CellDisk CD_DISK01_exadatacell01 successfully created
CellDisk CD_DISK02_exadatacell01 successfully created
CellDisk CD_DISK03_exadatacell01 successfully created
CellDisk CD_DISK04_exadatacell01 successfully created
CellDisk CD_DISK05_exadatacell01 successfully created
CellDisk CD_DISK06_exadatacell01 successfully created
CellDisk CD_DISK07_exadatacell01 successfully created
CellDisk CD_DISK08_exadatacell01 successfully created
CellDisk CD_DISK09_exadatacell01 successfully created
7.7.3. Show Created Cell Disk
CellCLI> list celldisk
         CD_DISK00_exadatacell01    normal
         CD_DISK01_exadatacell01    normal
         CD_DISK02_exadatacell01    normal
         CD_DISK03_exadatacell01    normal
         CD_DISK04_exadatacell01    normal
         CD_DISK05_exadatacell01    normal
         CD_DISK06_exadatacell01    normal
         CD_DISK07_exadatacell01    normal
         CD_DISK08_exadatacell01    normal
         CD_DISK09_exadatacell01    normal
         FD_00_exadatacell01        normal
         FD_01_exadatacell01        normal
         FD_02_exadatacell01        normal
         FD_03_exadatacell01        normal
7.7.4. Show Created Flash Disk
CellCLI> list celldisk where disktype=flashdisk
         FD_00_exadatacell01    normal
         FD_01_exadatacell01    normal
         FD_02_exadatacell01    normal
         FD_03_exadatacell01    normal
7.7.5. A Few Important Commands
CellCLI> list celldisk where disktype=harddisk
         CD_DISK00_exadatacell01    normal
         CD_DISK01_exadatacell01    normal
         CD_DISK02_exadatacell01    normal
         CD_DISK03_exadatacell01    normal
         CD_DISK04_exadatacell01    normal
         CD_DISK05_exadatacell01    normal
         CD_DISK06_exadatacell01    normal
         CD_DISK07_exadatacell01    normal
         CD_DISK08_exadatacell01    normal
         CD_DISK09_exadatacell01    normal
CellCLI> list physicaldisk where disktype=HardDisk attributes name
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK00
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK01
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK02
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK03
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK04
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK05
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK06
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK07
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK08
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell01_DISK09
CellCLI> list flashcache detail
         name:                 exadatacell01_FLASHCACHE
         cellDisk:             FD_02_exadatacell01,FD_01_exadatacell01,FD_03_exadatacell01,FD_00_exadatacell01
         creationTime:         2020-07-07T01:41:55-04:00
         degradedCelldisks:
         effectiveCacheSize:   1.625G
         id:                   c0505a54-7de2-4e87-bad5-e65d71bc76a0
         size:                 1.625G
         status:               normal
Step 8: Create Cell disks and Flash Cache storage for Exadata Storage Server on Node 2
Note::: It is assumed you cloned the VM after step 6.4.
Follow steps 6.5 to 7.7 on Node 2; the commands are summarized below.
Add the disks on Node 2, providing the correct VM name and path for Node 2.
Create the disks for Node 2:
cd "c:\Program Files\Oracle\VirtualBox" VBoxManage createhd --filename "C:\VM\exadatacell02\hd1.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd2.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd3.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd4.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd5.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd6.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd7.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd8.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd9.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd10.vdi" --size 1024 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd11.vdi" --size 600 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd12.vdi" --size 600 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd13.vdi" --size 600 --format VDI --variant Fixed VBoxManage createhd --filename "C:\VM\exadatacell02\hd14.vdi" --size 600 --format VDI --variant Fixed
Attach the disks to the VM:
cd "c:\Program Files\Oracle\VirtualBox" VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd1.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd2.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd3.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd4.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 5 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd5.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 6 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd6.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 7 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd7.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 8 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd8.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 9 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd9.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 10 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd10.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 11 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd11.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 12 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd12.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 13 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd13.vdi" --mtype normal VBoxManage storageattach exadatacell02 --storagectl "SATA" --port 14 --device 0 --type hdd --medium "C:\VM\exadatacell02\hd14.vdi" --mtype normal
[root@exadatacell02 raw]# pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
ln -s /dev/sdb exadatacell02_DISK00
ln -s /dev/sdc exadatacell02_DISK01
ln -s /dev/sdd exadatacell02_DISK02
ln -s /dev/sde exadatacell02_DISK03
ln -s /dev/sdf exadatacell02_DISK04
ln -s /dev/sdg exadatacell02_DISK05
ln -s /dev/sdh exadatacell02_DISK06
ln -s /dev/sdi exadatacell02_DISK07
ln -s /dev/sdj exadatacell02_DISK08
ln -s /dev/sdk exadatacell02_DISK09
ln -s /dev/sdl exadatacell02_FLASH00
ln -s /dev/sdm exadatacell02_FLASH01
ln -s /dev/sdn exadatacell02_FLASH02
ln -s /dev/sdo exadatacell02_FLASH03
[root@exadatacell02 raw]# pwd
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
[root@exadatacell02 raw]# ls -l
total 0
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK00 -> /dev/sdb
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK01 -> /dev/sdc
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK02 -> /dev/sdd
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK03 -> /dev/sde
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK04 -> /dev/sdf
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK05 -> /dev/sdg
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK06 -> /dev/sdh
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK07 -> /dev/sdi
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK08 -> /dev/sdj
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_DISK09 -> /dev/sdk
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_FLASH00 -> /dev/sdl
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_FLASH01 -> /dev/sdm
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_FLASH02 -> /dev/sdn
lrwxrwxrwx 1 root root 8 Jul 11 17:10 exadatacell02_FLASH03 -> /dev/sdo
[celladmin@exadatacell02 ~]$ cellcli -e create cell interconnect1=eth1
Cell exadatacell02 successfully created
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Flash cell disks, FlashCache, and FlashLog will be created...
CellDisk FD_00_exadatacell02 successfully created
CellDisk FD_01_exadatacell02 successfully created
CellDisk FD_02_exadatacell02 successfully created
CellDisk FD_03_exadatacell02 successfully created
Flash log exadatacell02_FLASHLOG successfully created
Flash cache exadatacell02_FLASHCACHE successfully created
CellCLI> create celldisk all
CellDisk CD_DISK00_exadatacell02 successfully created
CellDisk CD_DISK01_exadatacell02 successfully created
CellDisk CD_DISK02_exadatacell02 successfully created
CellDisk CD_DISK03_exadatacell02 successfully created
CellDisk CD_DISK04_exadatacell02 successfully created
CellDisk CD_DISK05_exadatacell02 successfully created
CellDisk CD_DISK06_exadatacell02 successfully created
CellDisk CD_DISK07_exadatacell02 successfully created
CellDisk CD_DISK08_exadatacell02 successfully created
CellDisk CD_DISK09_exadatacell02 successfully created
CellCLI> list physicaldisk where disktype=HardDisk attributes name
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK00
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK01
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK02
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK03
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK04
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK05
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK06
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK07
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK08
         /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/exadatacell02_DISK09
Step 9: Create Grid Disks for ASM on Both Nodes.
80% of each cell disk will be used for DATA and 20% for FRA; of the roughly 900 MB usable on each 1 GB cell disk, that is 720 MB for DATA and 180 MB for FRA. The scripts below create the grid disks.
GRID DISK for DATA:
[celladmin@exadatacell02 ~]$ vi create_data_grid_disk.cli
[celladmin@exadatacell02 ~]$ cat create_data_grid_disk.cli
create griddisk DATA_CD_DISK00_exadatacell02 celldisk=CD_DISK00_exadatacell02, size=720m
create griddisk DATA_CD_DISK01_exadatacell02 celldisk=CD_DISK01_exadatacell02, size=720m
create griddisk DATA_CD_DISK02_exadatacell02 celldisk=CD_DISK02_exadatacell02, size=720m
create griddisk DATA_CD_DISK03_exadatacell02 celldisk=CD_DISK03_exadatacell02, size=720m
create griddisk DATA_CD_DISK04_exadatacell02 celldisk=CD_DISK04_exadatacell02, size=720m
create griddisk DATA_CD_DISK05_exadatacell02 celldisk=CD_DISK05_exadatacell02, size=720m
create griddisk DATA_CD_DISK06_exadatacell02 celldisk=CD_DISK06_exadatacell02, size=720m
create griddisk DATA_CD_DISK07_exadatacell02 celldisk=CD_DISK07_exadatacell02, size=720m
create griddisk DATA_CD_DISK08_exadatacell02 celldisk=CD_DISK08_exadatacell02, size=720m
create griddisk DATA_CD_DISK09_exadatacell02 celldisk=CD_DISK09_exadatacell02, size=720m
[celladmin@exadatacell02 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sat Jul 11 18:17:23 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> @create_data_grid_disk.cli
GridDisk DATA_CD_DISK00_exadatacell02 successfully created
GridDisk DATA_CD_DISK01_exadatacell02 successfully created
GridDisk DATA_CD_DISK02_exadatacell02 successfully created
GridDisk DATA_CD_DISK03_exadatacell02 successfully created
GridDisk DATA_CD_DISK04_exadatacell02 successfully created
GridDisk DATA_CD_DISK05_exadatacell02 successfully created
GridDisk DATA_CD_DISK06_exadatacell02 successfully created
GridDisk DATA_CD_DISK07_exadatacell02 successfully created
GridDisk DATA_CD_DISK08_exadatacell02 successfully created
GridDisk DATA_CD_DISK09_exadatacell02 successfully created
CellCLI> list celldisk
         CD_DISK00_exadatacell02    normal
         CD_DISK01_exadatacell02    normal
         CD_DISK02_exadatacell02    normal
         CD_DISK03_exadatacell02    normal
         CD_DISK04_exadatacell02    normal
         CD_DISK05_exadatacell02    normal
         CD_DISK06_exadatacell02    normal
         CD_DISK07_exadatacell02    normal
         CD_DISK08_exadatacell02    normal
         CD_DISK09_exadatacell02    normal
         FD_00_exadatacell02        normal
         FD_01_exadatacell02        normal
         FD_02_exadatacell02        normal
         FD_03_exadatacell02        normal
CellCLI> list griddisk
         DATA_CD_DISK00_exadatacell02    active
         DATA_CD_DISK01_exadatacell02    active
         DATA_CD_DISK02_exadatacell02    active
         DATA_CD_DISK03_exadatacell02    active
         DATA_CD_DISK04_exadatacell02    active
         DATA_CD_DISK05_exadatacell02    active
         DATA_CD_DISK06_exadatacell02    active
         DATA_CD_DISK07_exadatacell02    active
         DATA_CD_DISK08_exadatacell02    active
         DATA_CD_DISK09_exadatacell02    active
Command to drop a grid disk:
CellCLI> drop griddisk DATA_CD_DISK00_exadatacell02
GridDisk DATA_CD_DISK00_exadatacell02 successfully dropped
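An entire set can also be dropped by prefix, which is handy when redoing the layout (add FORCE if the grid disks are still active):

CellCLI> drop griddisk all prefix=DATA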
GRID DISK for FRA
[celladmin@exadatacell02 ~]$ cat create_fra_grid_disk.cli
create griddisk FRA_CD_DISK00_exadatacell02 celldisk=CD_DISK00_exadatacell02, size=180m
create griddisk FRA_CD_DISK01_exadatacell02 celldisk=CD_DISK01_exadatacell02, size=180m
create griddisk FRA_CD_DISK02_exadatacell02 celldisk=CD_DISK02_exadatacell02, size=180m
create griddisk FRA_CD_DISK03_exadatacell02 celldisk=CD_DISK03_exadatacell02, size=180m
create griddisk FRA_CD_DISK04_exadatacell02 celldisk=CD_DISK04_exadatacell02, size=180m
create griddisk FRA_CD_DISK05_exadatacell02 celldisk=CD_DISK05_exadatacell02, size=180m
create griddisk FRA_CD_DISK06_exadatacell02 celldisk=CD_DISK06_exadatacell02, size=180m
create griddisk FRA_CD_DISK07_exadatacell02 celldisk=CD_DISK07_exadatacell02, size=180m
create griddisk FRA_CD_DISK08_exadatacell02 celldisk=CD_DISK08_exadatacell02, size=180m
create griddisk FRA_CD_DISK09_exadatacell02 celldisk=CD_DISK09_exadatacell02, size=180m
[celladmin@exadatacell02 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sat Jul 11 18:19:29 EDT 2020
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1

CellCLI> @create_fra_grid_disk.cli
GridDisk FRA_CD_DISK00_exadatacell02 successfully created
GridDisk FRA_CD_DISK01_exadatacell02 successfully created
GridDisk FRA_CD_DISK02_exadatacell02 successfully created
GridDisk FRA_CD_DISK03_exadatacell02 successfully created
GridDisk FRA_CD_DISK04_exadatacell02 successfully created
GridDisk FRA_CD_DISK05_exadatacell02 successfully created
GridDisk FRA_CD_DISK06_exadatacell02 successfully created
GridDisk FRA_CD_DISK07_exadatacell02 successfully created
GridDisk FRA_CD_DISK08_exadatacell02 successfully created
GridDisk FRA_CD_DISK09_exadatacell02 successfully created
A set of DBFS grid disks can also be created in one command across all hard-disk cell disks by using a prefix:
CellCLI> create griddisk all harddisk prefix='DBFS'
GridDisk DBFS_CD_DISK00_exadatacell02 successfully created
GridDisk DBFS_CD_DISK01_exadatacell02 successfully created
GridDisk DBFS_CD_DISK02_exadatacell02 successfully created
GridDisk DBFS_CD_DISK03_exadatacell02 successfully created
GridDisk DBFS_CD_DISK04_exadatacell02 successfully created
GridDisk DBFS_CD_DISK05_exadatacell02 successfully created
GridDisk DBFS_CD_DISK06_exadatacell02 successfully created
GridDisk DBFS_CD_DISK07_exadatacell02 successfully created
GridDisk DBFS_CD_DISK08_exadatacell02 successfully created
GridDisk DBFS_CD_DISK09_exadatacell02 successfully created
[celladmin@exadatacell01 raw]$ cellcli -e list cell detail
         name:                  exadatacell01
         bbuTempThreshold:      60
         bbuChargeThreshold:    800
         bmcType:               absent
         cellVersion:           OSS_11.2.3.2.1_LINUX.X64_130109
         cpuCount:              1
         diagHistoryDays:       7
         fanCount:              1/1
         fanStatus:             normal
         flashCacheMode:        WriteThrough
         id:                    75678317-ffa8-49eb-8b12-53e2ce0c7e8f
         interconnectCount:     2
         interconnect1:         eth1
         iormBoost:             0.0
         ipaddress1:            192.168.2.60/24
         kernelVersion:         2.6.18-348.el5
         makeModel:             Fake hardware
         metricHistoryDays:     7
         offloadEfficiency:     1.0
         powerCount:            1/1
         powerStatus:           normal
         releaseVersion:        11.2.3.2.1
         releaseTrackingBug:    14522699
         status:                online
         temperatureReading:    0.0
         temperatureStatus:     normal
         upTime:                0 days, 7:50
         cellsrvStatus:         running
         msStatus:              running
         rsStatus:              running
CellCLI> list griddisk
         DATA_CD_DISK00_exadatacell02    active
         DATA_CD_DISK01_exadatacell02    active
         DATA_CD_DISK02_exadatacell02    active
         DATA_CD_DISK03_exadatacell02    active
         DATA_CD_DISK04_exadatacell02    active
         DATA_CD_DISK05_exadatacell02    active
         DATA_CD_DISK06_exadatacell02    active
         DATA_CD_DISK07_exadatacell02    active
         DATA_CD_DISK08_exadatacell02    active
         DATA_CD_DISK09_exadatacell02    active
         DBFS_CD_DISK00_exadatacell02    active
         DBFS_CD_DISK01_exadatacell02    active
         DBFS_CD_DISK02_exadatacell02    active
         DBFS_CD_DISK03_exadatacell02    active
         DBFS_CD_DISK04_exadatacell02    active
         DBFS_CD_DISK05_exadatacell02    active
         DBFS_CD_DISK06_exadatacell02    active
         DBFS_CD_DISK07_exadatacell02    active
         DBFS_CD_DISK08_exadatacell02    active
         DBFS_CD_DISK09_exadatacell02    active
         FRA_CD_DISK00_exadatacell02     active
         FRA_CD_DISK01_exadatacell02     active
         FRA_CD_DISK02_exadatacell02     active
         FRA_CD_DISK03_exadatacell02     active
         FRA_CD_DISK04_exadatacell02     active
         FRA_CD_DISK05_exadatacell02     active
         FRA_CD_DISK06_exadatacell02     active
         FRA_CD_DISK07_exadatacell02     active
         FRA_CD_DISK08_exadatacell02     active
         FRA_CD_DISK09_exadatacell02     active
Create GRID DISK on Node 1:
Script for Node 1
create griddisk DATA_CD_DISK00_exadatacell01 celldisk=CD_DISK00_exadatacell01, size=720m
create griddisk DATA_CD_DISK01_exadatacell01 celldisk=CD_DISK01_exadatacell01, size=720m
create griddisk DATA_CD_DISK02_exadatacell01 celldisk=CD_DISK02_exadatacell01, size=720m
create griddisk DATA_CD_DISK03_exadatacell01 celldisk=CD_DISK03_exadatacell01, size=720m
create griddisk DATA_CD_DISK04_exadatacell01 celldisk=CD_DISK04_exadatacell01, size=720m
create griddisk DATA_CD_DISK05_exadatacell01 celldisk=CD_DISK05_exadatacell01, size=720m
create griddisk DATA_CD_DISK06_exadatacell01 celldisk=CD_DISK06_exadatacell01, size=720m
create griddisk DATA_CD_DISK07_exadatacell01 celldisk=CD_DISK07_exadatacell01, size=720m
create griddisk DATA_CD_DISK08_exadatacell01 celldisk=CD_DISK08_exadatacell01, size=720m
create griddisk DATA_CD_DISK09_exadatacell01 celldisk=CD_DISK09_exadatacell01, size=720m
create griddisk FRA_CD_DISK00_exadatacell01 celldisk=CD_DISK00_exadatacell01, size=180m
create griddisk FRA_CD_DISK01_exadatacell01 celldisk=CD_DISK01_exadatacell01, size=180m
create griddisk FRA_CD_DISK02_exadatacell01 celldisk=CD_DISK02_exadatacell01, size=180m
create griddisk FRA_CD_DISK03_exadatacell01 celldisk=CD_DISK03_exadatacell01, size=180m
create griddisk FRA_CD_DISK04_exadatacell01 celldisk=CD_DISK04_exadatacell01, size=180m
create griddisk FRA_CD_DISK05_exadatacell01 celldisk=CD_DISK05_exadatacell01, size=180m
create griddisk FRA_CD_DISK06_exadatacell01 celldisk=CD_DISK06_exadatacell01, size=180m
create griddisk FRA_CD_DISK07_exadatacell01 celldisk=CD_DISK07_exadatacell01, size=180m
create griddisk FRA_CD_DISK08_exadatacell01 celldisk=CD_DISK08_exadatacell01, size=180m
create griddisk FRA_CD_DISK09_exadatacell01 celldisk=CD_DISK09_exadatacell01, size=180m
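The Node 1 script is identical to Node 2's apart from the cell name, so it can also be generated with a small loop; a sketch, run as celladmin:

#!/bin/bash
# Generate the DATA and FRA create statements for exadatacell01.
CELL=exadatacell01
: > create_grid_disk.cli
for i in 0 1 2 3 4 5 6 7 8 9; do
  echo "create griddisk DATA_CD_DISK0${i}_${CELL} celldisk=CD_DISK0${i}_${CELL}, size=720m" >> create_grid_disk.cli
  echo "create griddisk FRA_CD_DISK0${i}_${CELL} celldisk=CD_DISK0${i}_${CELL}, size=180m" >> create_grid_disk.cli
done
# Then run it inside cellcli with: @create_grid_disk.cli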
Note::: The Storage Server configuration is almost done. Next, I will configure the Compute Node.
Configure Compute / DB Node: Exadata Simulation Part II
Comments
Nice post !!!
Could you please let me know the startup and shutdown sequence of the Exadata setup? When I start the VMs, I get a crsd process error.
Author
Hi Kishan,
Thanks! If possible please share error details.
You can follow the sequence below (the cell-side commands are sketched after the list):
To startup:
1. Start Cell Servers
2. Validate that these processes are running: RS (Restart Server) and MS (Management Server)
3. Start DB / Compute Nodes
To shutdown:
1. Stop DB and CRS (for RAC) on Compute Nodes
2. Shutdown Compute Nodes
3. Shutdown Cell Nodes (you can stop all services)
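On the cell side, these steps map to CellCLI commands used earlier in this post; a quick reference:

CellCLI> alter cell startup services all     (start RS, MS and CELLSRV)
CellCLI> list cell detail                    (verify cellsrvStatus/msStatus/rsStatus are running)
CellCLI> alter cell shutdown services all    (stop all services before powering off)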
Please let me know if you have any further query.
Thanks,
Mohammad Samad
Hi Mohammad,
I tried starting the setup like you mentioned. Whenever I install the setup, I am able to access crsd and the database works fine. Once I restart the setup, I face CRS communication errors. I am able to ping the cell node from the compute node. My cellinit.ora and cellip.ora are fine.
Appreciate your help !! Thanks in advance
cellnode:
[root@exceladm00 ~]# lsmod|grep rds
rds_rdma 106561 0
rds_tcp 48097 0
rds 155561 224 rds_rdma,rds_tcp
rdma_cm 73429 2 rds_rdma,ib_iser
ib_core 108097 8 rds_rdma,ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad,iw_cxgb3
===============================================================
cellsrvStatus: running
msStatus: running
rsStatus: running
===============================================================
CellCLI> list griddisk
DATA_CD_cell01_exceladm00 active
DATA_CD_cell02_exceladm00 active
DATA_CD_cell03_exceladm00 active
DATA_CD_cell04_exceladm00 active
DATA_CD_cell05_exceladm00 active
DATA_CD_cell06_exceladm00 active
FRA_CD_cell07_exceladm00 active
FRA_CD_cell08_exceladm00 active
FRA_CD_cell09_exceladm00 active
FRA_CD_cell10_exceladm00 active
OCR_CD_cell11_exceladm00 active
OCR_CD_cell12_exceladm00 active
==================================================================
compute node:
[oracle@exdbadm01 ~]$ srvctl status asm
PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd
[root@exdbadm01 oracle]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[root@exdbadm01 oracle]# crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@exdbadm01 oracle]# crsctl start has
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
=========================================================
alertlog
[cssd(9547)]CRS-1707:Lease acquisition for node exdbadm01 number 1 completed
2020-08-13 22:01:16.294:
[cssd(9547)]CRS-1605:CSSD voting file is online: o/192.168.56.69/OCR_CD_cell11_exceladm00; details in /apps01/home/11.2.0/grid/log/exdbadm01/cssd/ocssd.log.
2020-08-13 22:06:18.261:
[cssd(9547)]CRS-1632:Node exdbadm02 is being removed from the cluster in cluster incarnation 492820879
2020-08-13 22:06:18.873:
[cssd(9547)]CRS-1601:CSSD Reconfiguration complete. Active nodes are exdbadm01 .
2020-08-13 22:11:07.680:
[cssd(9547)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /apps01/home/11.2.0/grid/log/exdbadm01/cssd/ocssd.log
2020-08-13 22:11:07.681:
[cssd(9547)]CRS-1603:CSSD on node exdbadm01 shutdown by user.
2020-08-13 22:11:07.680:
[/apps01/home/11.2.0/grid/bin/cssdagent(9536)]CRS-5818:Aborted command ‘start’ for resource ‘ora.cssd’. Details at (:CRSAGF00113:) {0:0:192} in /apps01/home/11.2.0/grid/log/exdbadm01/agent/ohasd/oracssdagent_root/oracssdagent_root.log.
2020-08-13 22:11:12.805:
[ohasd(2875)]CRS-2765:Resource ‘ora.cssdmonitor’ has failed on server ‘exdbadm01’.
2020-08-13 22:11:14.641:
[cssd(10590)]CRS-1713:CSSD daemon is started in clustered mode
2020-08-13 22:11:16.215:
[ohasd(2875)]CRS-2767:Resource state recovery not attempted for ‘ora.diskmon’ as its target state is OFFLINE
2020-08-13 22:11:16.215:
[ohasd(2875)]CRS-2769:Unable to failover resource ‘ora.diskmon’.
2020-08-13 22:11:20.690:
[cssd(10590)]CRS-1707:Lease acquisition for node exdbadm01 number 1 completed
2020-08-13 22:11:22.047:
[cssd(10590)]CRS-1605:CSSD voting file is online: o/192.168.56.69/OCR_CD_cell11_exceladm00; details in /apps01/home/11.2.0/grid/log/exdbadm01/cssd/ocssd.log.
2020-08-13 22:16:24.054:
[cssd(10590)]CRS-1632:Node exdbadm02 is being removed from the cluster in cluster incarnation 492820883
2020-08-13 22:16:24.968:
[cssd(10590)]CRS-1601:CSSD Reconfiguration complete. Active nodes are exdbadm01 .
2020-08-13 22:21:13.594:
[cssd(10590)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /apps01/home/11.2.0/grid/log/exdbadm01/cssd/ocssd.log
2020-08-13 22:21:13.595:
[cssd(10590)]CRS-1603:CSSD on node exdbadm01 shutdown by user.
2020-08-13 22:21:13.594:
[/apps01/home/11.2.0/grid/bin/cssdagent(10579)]CRS-5818:Aborted command ‘start’ for resource ‘ora.cssd’. Details at (:CRSAGF00113:) {0:0:263} in /apps01/home/11.2.0/grid/log/exdbadm01/agent/ohasd/oracssdagent_root/oracssdagent_root.log.
2020-08-13 22:21:18.937:
[ohasd(2875)]CRS-2765:Resource ‘ora.cssdmonitor’ has failed on server ‘exdbadm01’.
2020-08-13 22:21:20.729:
[cssd(12070)]CRS-1713:CSSD daemon is started in clustered mode
2020-08-13 22:21:22.318:
[ohasd(2875)]CRS-2767:Resource state recovery not attempted for ‘ora.diskmon’ as its target state is OFFLINE
2020-08-13 22:21:22.318:
[ohasd(2875)]CRS-2769:Unable to failover resource ‘ora.diskmon’.
2020-08-13 22:21:26.817:
[cssd(12070)]CRS-1707:Lease acquisition for node exdbadm01 number 1 completed
2020-08-13 22:21:28.181:
[cssd(12070)]CRS-1605:CSSD voting file is online: o/192.168.56.69/OCR_CD_cell11_exceladm00; details in /apps01/home/11.2.0/grid/log/exdbadm01/cssd/ocssd.log.
============================================================================
ocssd.log
2020-08-13 22:19:57.824: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:19:57.824: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:19:58.048: [ CSSD][3066243392]clssnmSendingThread: sending status msg to all nodes
2020-08-13 22:19:58.048: [ CSSD][3066243392]clssnmSendingThread: sent 4 status msgs to all nodes
2020-08-13 22:19:58.329: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:19:58.329: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:19:58.832: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:19:58.832: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:19:59.336: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:19:59.336: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:19:59.841: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:19:59.841: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:00.344: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:00.344: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:00.849: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:00.849: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:01.353: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:01.353: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:01.856: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:01.856: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:02.065: [ CSSD][3066243392]clssnmSendingThread: sending status msg to all nodes
2020-08-13 22:20:02.065: [ CSSD][3066243392]clssnmSendingThread: sent 4 status msgs to all nodes
2020-08-13 22:20:02.360: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:02.360: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
2020-08-13 22:20:02.864: [ default][3070974272]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 22:20:02.864: [ default][3070974272]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() – No such file or directory
=========================================================================
[root@exdbadm01 oracle]# /grid/stage/ext/bin/kfod op=disks disk=all
——————————————————————————–
Disk Size Path User Group
================================================================================
1: 976 Mb o/192.168.56.69/DATA_CD_cell01_exceladm00
2: 976 Mb o/192.168.56.69/DATA_CD_cell02_exceladm00
3: 976 Mb o/192.168.56.69/DATA_CD_cell03_exceladm00
4: 976 Mb o/192.168.56.69/DATA_CD_cell04_exceladm00
5: 976 Mb o/192.168.56.69/DATA_CD_cell05_exceladm00
6: 976 Mb o/192.168.56.69/DATA_CD_cell06_exceladm00
7: 976 Mb o/192.168.56.69/FRA_CD_cell07_exceladm00
8: 976 Mb o/192.168.56.69/FRA_CD_cell08_exceladm00
9: 976 Mb o/192.168.56.69/FRA_CD_cell09_exceladm00
10: 976 Mb o/192.168.56.69/FRA_CD_cell10_exceladm00
11: 976 Mb o/192.168.56.69/OCR_CD_cell11_exceladm00
12: 976 Mb o/192.168.56.69/OCR_CD_cell12_exceladm00
Author
Hi Kishan,
Thanks for the details. It seems the compute node is able to access the storage.
Could you please check the below from the compute node?
[grid@exadatadb01 bin]$ export LD_LIBRARY_PATH=/u01/software/grid/stage/ext/lib
[grid@exadatadb01 bin]$ /u01/software/grid/stage/ext/bin/kfod disks=all op=disks
This command should return all of the disks from all storage cells.
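If kfod hangs or keeps logging "kgzf_dskm_conn2 ... error 56815 ... No such file or directory" (as in your CSSD log above), it usually means the ASM client cannot connect to the diskmon pipe on the compute node. A quick sanity check, as a minimal sketch (substitute your own grid home for $GRID_HOME), is to confirm diskmon before re-running kfod:
[grid@exadatadb01 ~]$ ps -ef | grep diskmon | grep -v grep
[grid@exadatadb01 ~]$ $GRID_HOME/bin/crsctl stat res ora.diskmon -init
Keep in mind that from 11.2.0.3 onwards diskmon is started on demand, so an OFFLINE state by itself is not necessarily a problem until an ASM client actually needs the cells.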
Thanks!
Hi Mohammad,
Thanks for the reply.
After starting the cell node and its services, I started the compute node, and when I check the CRS status I get the error below. I could not find any recent logs from crsd. Please find the logs below.
Could you please explain how the compute node grid disks communicate with the cell node cell disks when the nodes start up? I know that the cellip.ora and cellinit.ora files are used for communication between the compute and cell nodes.
Whenever I install and configure the setup, the crsd services, Grid, and database work fine. But after I restart the nodes, I face crsd issues. Appreciate your help!
Thanks in advance!
cellnode:
[root@exceladm00 ~]# lsmod|grep rds
rds_rdma 106561 0
rds_tcp 48097 0
rds 155561 224 rds_rdma,rds_tcp
rdma_cm 73429 2 rds_rdma,ib_iser
ib_core 108097 8 rds_rdma,ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad,iw_cxgb3
=======================================================================
CellCLI> list cell detail
cellsrvStatus: running
msStatus: running
rsStatus: running
========================================================================
CellCLI> list griddisk
DATA_CD_cell01_exceladm00 active
DATA_CD_cell02_exceladm00 active
DATA_CD_cell03_exceladm00 active
DATA_CD_cell04_exceladm00 active
DATA_CD_cell05_exceladm00 active
DATA_CD_cell06_exceladm00 active
FRA_CD_cell07_exceladm00 active
FRA_CD_cell08_exceladm00 active
FRA_CD_cell09_exceladm00 active
FRA_CD_cell10_exceladm00 active
OCR_CD_cell11_exceladm00 active
OCR_CD_cell12_exceladm00 active
======================================================================
Compute node:
[oracle@exdbadm01 ~]$ srvctl status asm
PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd
======================================================================
[root@exdbadm01 oracle]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
===========================================================
[root@exdbadm01 oracle]# crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
==========================================================
[root@exdbadm01 oracle]# crsctl start has
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
==========================================================
ohas.log
2020-08-13 22:01:07.635: [ CRSCOMM][3232549184] IpcL: Found connection in pending connections
2020-08-13 22:01:07.635: [ CRSCOMM][3232549184] IpcL: Adding connection: 51
2020-08-13 22:01:07.635: [CLSFRAME][3232549184] New IPC Member:{Relative|Node:0|Process:51|Type:3}:AGENT username=root
2020-08-13 22:01:07.635: [CLSFRAME][3232549184] New process connected to us ID:{Relative|Node:0|Process:51|Type:3} Info:AGENT
2020-08-13 22:01:07.636: [ AGFW][3238852928]{0:51:2} Agfw Proxy Server received the message: AGENT_HANDSHAKE[Proxy] ID 20484:11
2020-08-13 22:01:07.636: [ AGFW][3238852928]{0:51:2} Expected username [root] actual [root] for:/apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:07.636: [ AGFW][3238852928]{0:51:2} Agent /apps01/home/11.2.0/grid/bin/cssdagent_root with pid:9536 connected to server.
2020-08-13 22:01:07.637: [ AGFW][3238852928]{0:51:2} Agfw Proxy Server sending message: RESTYPE_ADD[ora.cssd.type] ID 8196:1863 to the agent /apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:07.637: [ AGFW][3238852928]{0:51:2} Agfw Proxy Server sending message: RESOURCE_ADD[ora.cssd 1 1] ID 4356:1864 to the agent /apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:07.637: [ AGFW][3238852928]{0:0:192} Agfw Proxy Server forwarding the message: RESOURCE_START[ora.cssd 1 1] ID 4098:1841 to the agent /apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:07.638: [ AGFW][3238852928]{0:0:192} Agfw Proxy Server replying to the message: AGENT_HANDSHAKE[Proxy] ID 20484:11
2020-08-13 22:01:07.702: [ AGFW][3238852928]{0:51:2} Received the reply to the message: RESTYPE_ADD[ora.cssd.type] ID 8196:1863 from the agent /apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:07.703: [ AGFW][3238852928]{0:51:2} Received the reply to the message: RESOURCE_ADD[ora.cssd 1 1] ID 4356:1864 from the agent /apps01/home/11.2.0/grid/bin/cssdagent_root
2020-08-13 22:01:10.249: [ AGFW][3238852928]{0:11:83} Agfw Proxy Server received the message: RESOURCE_STATUS[Proxy] ID 20481:662
2020-08-13 22:01:10.249: [ AGFW][3238852928]{0:11:83} Verifying msg rid = ora.diskmon 1 1
2020-08-13 22:01:10.249: [ AGFW][3238852928]{0:11:83} Received state change for ora.diskmon 1 1 [old state = ONLINE, new state = PLANNED_OFFLINE]
2020-08-13 22:01:10.250: [ AGFW][3238852928]{0:11:83} Agfw Proxy Server sending message to PE, Contents = [MIDTo:2|OpID:3|FromA:{Invalid|Node:0|Process:0|Type:0}|ToA:{Invalid|Node:-1|Process:-1|Type:-1}|MIDFrom:0|Type:4|Pri2|Id:1870:Ver:2]
2020-08-13 22:01:10.250: [ AGFW][3238852928]{0:11:83} Agfw Proxy Server replying to the message: RESOURCE_STATUS[Proxy] ID 20481:662
2020-08-13 22:01:10.250: [ CRSPE][3249359168]{0:11:83} State change received from exdbadm01 for ora.diskmon 1 1
2020-08-13 22:01:10.250: [ CRSPE][3249359168]{0:11:83} Processing PE command id=117. Description: [Resource State Change (ora.diskmon 1 1) : 0x920ee50]
2020-08-13 22:01:10.250: [ CRSPE][3249359168]{0:11:83} RI [ora.diskmon 1 1] new external state [OFFLINE] old value: [ONLINE] on exdbadm01 label = []
2020-08-13 22:01:10.250: [ CRSPE][3249359168]{0:11:83} RI [ora.diskmon 1 1] new target state: [OFFLINE] old value: [ONLINE]
2020-08-13 22:01:10.251: [ CRSPE][3249359168]{0:11:83} Processing unplanned state change for [ora.diskmon 1 1]
2020-08-13 22:01:10.251: [ INIT][3249359168]{0:11:83} {0:11:83} Target is not ONLINE, not recovering [ora.diskmon 1 1]
2020-08-13 22:01:10.251: [ INIT][3249359168]{0:11:83} {0:11:83} Created alert : (:CRSPE00191:) : Failover cannot be completed for [ora.diskmon 1 1]. Stopping it and the resource tree
2020-08-13 22:01:10.251: [ CRSPE][3249359168]{0:11:83} Op 0x920ddf0 has 4 WOs
2020-08-13 22:01:10.252: [ CRSPE][3249359168]{0:11:50} Re-evaluation of queued op [STOP of [ora.diskmon 1 1] on [exdbadm01] : 0x9372ca0]. found it no longer needed:CRS-2506: Operation on 'STOP of [ora.diskmon 1 1] on [exdbadm01] : 0x9372ca0' has been cancelled
. Finishing the op.
2020-08-13 22:01:10.252: [ CRSPE][3249359168]{0:11:50} PE Command [ Resource State Change (ora.diskmon 1 1) : 0x93af1e0 ] has completed
2020-08-13 22:01:10.253: [ CRSPE][3249359168]{0:11:73} Re-evaluation of queued op [STOP of [ora.diskmon 1 1] on [exdbadm01] : 0x9159950]. found it no longer needed:CRS-2506: Operation on 'STOP of [ora.diskmon 1 1] on [exdbadm01] : 0x9159950' has been cancelled
. Finishing the op.
2020-08-13 22:01:10.253: [ CRSPE][3249359168]{0:11:73} PE Command [ Resource State Change (ora.diskmon 1 1) : 0x9372a30 ] has completed
2020-08-13 22:01:10.257: [ CRSPE][3249359168]{0:11:73} ICE has queued an operation. Details: Operation [STOP of [ora.diskmon 1 1] on [exdbadm01] : 0x920ddf0] cannot run cause it needs R lock for: WO for Placement Path RI:[ora.diskmon 1 1] server [] target states [OFFLINE ], locked by op [START of [ora.crsd 1 1] on [exdbadm01] : local=0, unplanned=00x97a57a0]. Owner: CRS-2682: It is locked by 'root' for command 'Start Resource' issued from 'exdbadm01'
2020-08-13 22:01:10.257: [ CRSOCR][3240954176]{0:11:83} Multi Write Batch processing…
2020-08-13 22:01:10.258: [ CRSOCR][3240954176]{0:11:83} Setting value for key OHASD.RESOURCES.ora!diskmon.INTERNAL
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:50} Agfw Proxy Server received the message: CMD_COMPLETED[Proxy] ID 20482:1872
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:50} Agfw Proxy Server replying to the message: CMD_COMPLETED[Proxy] ID 20482:1872
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:50} Agfw received reply from PE for resource state change for ora.diskmon 1 1
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:73} Agfw Proxy Server received the message: CMD_COMPLETED[Proxy] ID 20482:1873
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:73} Agfw Proxy Server replying to the message: CMD_COMPLETED[Proxy] ID 20482:1873
2020-08-13 22:01:10.258: [ AGFW][3238852928]{0:11:73} Agfw received reply from PE for resource state change for ora.diskmon 1 1
2020-08-13 22:01:10.260: [ CRSOCR][3240954176]{0:11:83} Multi Write Batch done.
==========================================================
cssd logs:
2020-08-13 21:58:49.864: [ CSSD][798902592]clssnmSendingThread: sending status msg to all nodes
2020-08-13 21:58:49.865: [ CSSD][798902592]clssnmSendingThread: sent 4 status msgs to all nodes
2020-08-13 21:58:49.989: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:49.989: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:50.493: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:50.493: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:50.996: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:50.997: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:51.502: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:51.502: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:52.005: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:52.005: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:52.510: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:52.510: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:53.014: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:53.014: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
2020-08-13 21:58:53.517: [ default][803633472]kgzf_dskm_conn2: skgznp_connect(default pipe) failed with error 56815
2020-08-13 21:58:53.517: [ default][803633472]kgzf_dskm_conn2: error 56815 at location skgznpcon6 | connect() - No such file or directory
===============================================================================
trace output
failure occurred at: sskgxplp
additional information: Invalid protocol requested (2) or protocol not loaded.
Could you please share your .bashrc file and the environment used while starting the compute and cell nodes? Also, how is RDS used by the compute node to communicate with the cell node and discover the ASM grid disks after a server restart?
Author
Hi Kishan,
If you add the line below to /etc/modprobe.d/rds.conf, the RDS modules will be loaded automatically during compute node startup.
[root@exadatadb01 ~]# cat /etc/modprobe.d/rds.conf
install rds /sbin/modprobe --ignore-install rds && /sbin/modprobe rds_tcp && /sbin/modprobe rds_rdma
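After the next reboot, you can verify that the modules came up by checking lsmod again; you should see rds, rds_tcp, and rds_rdma listed, same as the lsmod output you shared from your cell node:
[root@exadatadb01 ~]# lsmod | grep rds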
Thanks
Samad
Hi, below are the details of my configuration.
compute node
============
[root@dbm01 ~]# clear
[root@dbm01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.131 dbm01.oracle.com dbm01 ===> Compute Node
192.168.1.150 celldb01.oracle.com celldb01 ===> Storage Node
[root@dbm01 ~]# hostname
dbm01.oracle.com
[root@dbm01 ~]# lsmod |grep rds
[root@dbm01 ~]# modprobe rds;modprobe rds_tcp;modprobe rds_rdma
[root@dbm01 ~]# modprobe rds
[root@dbm01 ~]# modprobe rds_tcp
[root@dbm01 ~]# modprobe rds_rdma
[root@dbm01 ~]# vi /etc/modprobe.d/rds.conf
[root@dbm01 ~]# lsmod |grep rds
rds_rdma 131072 1
rdma_cm 53248 1 rds_rdma
ib_ipoib 114688 1 rds_rdma
ib_cm 65536 3 rds_rdma,rdma_cm,ib_ipoib
ib_core 102400 7 rds_rdma,rdma_cm,iw_cm,ib_ipoib,ib_cm,ib_sa,ib_mad
rds_tcp 24576 0
rds 266240 2 rds_rdma,rds_tcp
ipv6 417792 33 rds_rdma,ib_addr,rds_tcp,[permanent]
[grid@dbm01 ~]$ cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.1.150"
[grid@dbm01 ~]$ cat /etc/oracle/cell/network-config/cellinit.ora
CELL Initialization Parameters
ipaddress1=192.168.1.131/24
_cell_print_all_params=true
_skgxp_gen_rpc_timeout_in_sec=90
_skgxp_gen_ant_off_rpc_timeout_in_sec=300
_skgxp_udp_interface_detection_time_secs=15
_skgxp_udp_use_tcb_client=true
_skgxp_udp_use_tcb=false
_reconnect_to_cell_attempts=5
[root@dbm01 network-config]# su – grid
[grid@dbm01 ~]$ export LD_LIBRARY_PATH=/tmp/grid/stage/ext/lib
[grid@dbm01 ~]$ /tmp/grid/stage/ext/bin/kfod disks=all op=disks ===> Not able to Detect Disks
Storage Node :
===========
[root@celldb01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.150 celldb01.oracle.com celldb01
192.168.1.131 dbm01.oracle.com dbm01
[root@celldb01 ~]# hostname
celldb01.oracle.com
[root@celldb01 ~]# lsmod |grep rds
rds_rdma 112824 0
rdma_cm 63121 1 rds_rdma
ib_core 82430 6 rds_rdma,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
rds_tcp 10424 1
rds 111157 4 rds_rdma,rds_tcp
Mon Dec 07 02:40:33 2020
Cannot find management IP in cell_disk_config.xml file ==> Error in alert log of storage server
CellCLI> list cell detail
name: exadbcell01
bbuTempThreshold: 60
bbuChargeThreshold: 800
bmcType: absent
cellVersion: OSS_11.2.3.2.1_LINUX.X64_130109
cpuCount: 2
diagHistoryDays: 7
fanCount: 1/1
fanStatus: normal
flashCacheMode: WriteThrough
id: 9589b455-d0ee-4c3f-93b5-056c3a160b26
interconnectCount: 1
iormBoost: 0.0
ipaddress1: 192.168.1.150/24
kernelVersion: 3.8.13-16.2.1.el6uek.x86_64
makeModel: Fake hardware
metricHistoryDays: 7
offloadEfficiency: 1.0
powerCount: 1/1
powerStatus: normal
releaseVersion: 11.2.3.2.1
releaseTrackingBug: 14522699
status: online
temperatureReading: 0.0
temperatureStatus: normal
upTime: 0 days, 0:31
cellsrvStatus: running
msStatus: running
rsStatus: running
Can you please take a look at my configuration and help me identify the issues?
Issue 1: Not able to detect disks from the cell server.
Issue 2: "Cannot find management IP in cell_disk_config.xml file" error in the alert log of the storage server.
Thanks in advance!
Hi,
I was able to solve issues 1 and 2 by adding the IP to /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora before creating the new cell (my mistake was creating the cell before adding the IP to cellinit.ora):
cellcli -e create cell interconnect1=eth2 (192.168.2.60)
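For anyone hitting the same errors: that deploy-config cellinit.ora only needs the cell's interconnect IP before create cell is run. A minimal example, using cell node 1's private IP from this article's environment (adjust for your own setup):
[root@exadatacell01 ~]# cat /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora
ipaddress1=192.168.2.60/24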
By the way, thanks for the good document.
Author
Thanks, Suresh. I'm glad this article helped you!
Hi there,
I have configured two cell nodes and created the grid disks successfully. However, I cannot detect these grid disks from the DB node.
Command: /u01/app/19.3.0/grid/bin/kfod disks=all op=disks
The above command hangs and does not return.
[root@exadatadb01 bin]# more /etc/oracle/cell/network-config/cellinit.ora
ipaddress1=192.168.2.80/24
_cell_print_all_params=true
_skgxp_gen_rpc_timeout_in_sec=90
_skgxp_gen_ant_off_rpc_timeout_in_sec=300
_skgxp_udp_interface_detection_time_secs=15
_skgxp_udp_use_tcb_client=true
_skgxp_udp_use_tcb=false
_reconnect_to_cell_attempts=5
[root@exadatadb01 /]# cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.2.60"
cell="192.168.2.70"
Is there any way to find where the issue is?