A node in the cluster may crash because of hardware or OS problems and become unrecoverable. In that case the DBA needs to remove the problematic node from the clusterware configuration. In this post I will show how to delete a node from a two-node RAC cluster.
The cluster consists of two nodes, and ocmnode1 is the one to be deleted:
- ocmnode1
- ocmnode2
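If the crashed node was still hosting a database instance, clean up that instance from the cluster configuration first, working from a surviving node. A minimal sketch with srvctl, assuming a hypothetical admin-managed database named orcl whose instance orcl1 ran on ocmnode1 (adjust names to your environment):

[oracle@ocmnode2 ~]$ srvctl status database -db orcl
[oracle@ocmnode2 ~]$ srvctl remove instance -db orcl -instance orcl1

On a healthy node the instance would be stopped first with srvctl stop instance -db orcl -instance orcl1; on a crashed node it is already down.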
Deinstall Oracle Home:
[oracle@ocmnode1 ~]$ . ./.bash_profile
[oracle@ocmnode1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@ocmnode1 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: ocmnode1
Checking for sufficient temp space availability on node(s) : 'ocmnode1'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2020-08-08_11-28-25-AM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2020-08-08_11-28-33-AM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []:
Database Check Configuration END

Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check8522.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The following nodes are part of this cluster: ocmnode1
The cluster node(s) on which the Oracle home deinstallation will be performed are:ocmnode1
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-08-08_11-28-24-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-08-08_11-28-24-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2020-08-08_11-36-02-AM.log

Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2020-08-08_11-36-02-AM.log
Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean8522.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2020-08-08_11-28-16AM/response/deinstall_2020-08-08_11-28-24-AM.rsp
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-08-08_11-28-24-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-08-08_11-28-24-AM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to ocmnode1
Setting CLUSTER_NODES to ocmnode1
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2020-08-08_11-28-16AM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the local node : Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2020-08-08_11-28-16AM' on node 'ocmnode1'
## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############
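To confirm that the Oracle home was detached, the central inventory on ocmnode1 can be inspected; a quick sketch, assuming the inventory location /u01/app/oraInventory reported in the log:

[oracle@ocmnode1 ~]$ grep dbhome_1 /u01/app/oraInventory/ContentsXML/inventory.xml

The dbhome_1 entry should no longer appear as an active home (detached homes are either dropped or flagged as removed).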
Update Inventory for the Oracle Home on the remaining node (ocmnode2):

[oracle@ocmnode2 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@ocmnode2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={ocmnode2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Check the pinned status of the node you want to delete, as the root user or as the user that installed Oracle Clusterware. Since the Clusterware stack is already down on ocmnode1 (PRCO-2), run olsnodes from a surviving node:

[root@ocmnode1 trace]# olsnodes -s -t
PRCO-19: Failure retrieving list of nodes in the cluster
PRCO-2: Unable to communicate with the clusterware

[grid@ocmnode2 dev]$ olsnodes -s -t
ocmnode1        Inactive        Unpinned
ocmnode2        Active          Unpinned
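Here both nodes are already Unpinned, so nothing further is needed. If the node to be deleted had been reported as Pinned, it would have to be unpinned first; a sketch, run as root from a surviving node:

[root@ocmnode2 ~]# /u01/app/12.1.0/grid/bin/crsctl unpin css -n ocmnode1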
Deconfigure the Clusterware:
On the node you want to delete, run the following command as the root user:
[root@ocmnode1 ~]# /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'ocmnode1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'ocmnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ocmnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/08/08 10:51:31 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2020/08/08 10:51:55 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2020/08/08 10:51:55 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
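Before touching the OCR from the surviving node, you can confirm that the stack really is down on ocmnode1; as a quick sketch:

[root@ocmnode1 ~]# /u01/app/12.1.0/grid/bin/crsctl check crs

This should now fail to contact Oracle High Availability Services, since the stack has been deconfigured on this node. Note that if the crashed node were completely unreachable, this deconfiguration step would simply be skipped and the node removed from a surviving node as shown next.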
Delete the node from the cluster configuration, running the following as root from one of the remaining nodes:
[root@ocmnode2 ~]# /u01/app/12.1.0/grid/bin/crsctl delete node -n ocmnode1
CRS-4661: Node ocmnode1 successfully deleted.
Verify the Clusterware Status:
[root@ocmnode2 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid
[root@ocmnode2 ~]# crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    ocmnode2
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    ocmnode2
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    ocmnode2
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    ocmnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    ocmnode2
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    ocmnode2
ora.MGMTLSNR   ora....nr.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
ora.OCR.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    ocmnode2
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    ocmnode2
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    ocmnode2
ora.mgmtdb     ora....db.type 0/2    0/1    ONLINE    ONLINE    ocmnode2
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    ocmnode2
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    ocmnode2
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    ocmnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    ocmnode2
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    ocmnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    ocmnode2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    ocmnode2
[root@ocmnode2 ~]# crsctl check cluster -all
**************************************************************
ocmnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
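Note that crs_stat is deprecated in 12c; the same resource view is available through crsctl, for example:

[root@ocmnode2 ~]# /u01/app/12.1.0/grid/bin/crsctl stat res -t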
Update Inventory for the Grid Home on the node being deleted (ocmnode1):
[grid@ocmnode1 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@ocmnode1 ~]$ echo $ORACLE_HOME
/u01/app/12.1.0/grid
[grid@ocmnode1 ~]$ cd $ORACLE_HOME
[grid@ocmnode1 grid]$ cd oui/bin
[grid@ocmnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ocmnode1}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Deinstall Grid Home:
[grid@ocmnode1 bin]$ cd /u01/app/12.1.0/grid/deinstall
[grid@ocmnode1 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2020-08-08_11-46-53AM/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: ocmnode1
Checking for sufficient temp space availability on node(s) : 'ocmnode1'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2020-08-08_11-46-53AM/logs//crsdc_2020-08-08_11-47-10AM.log

Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/netdc_check2020-08-08_11-47-10-AM.log
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/asmcadc_check2020-08-08_11-47-10-AM.log

Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/databasedc_check2020-08-08_11-47-10-AM.log
Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The following nodes are part of this cluster: ocmnode1
The cluster node(s) on which the Oracle home deinstallation will be performed are:ocmnode1
Oracle Home selected for deinstall is: /u01/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2020-08-08_11-46-53AM/logs/deinstall_deconfig2020-08-08_11-47-09-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2020-08-08_11-46-53AM/logs/deinstall_deconfig2020-08-08_11-47-09-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/databasedc_clean2020-08-08_11-47-21-AM.log
ASM de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/asmcadc_clean2020-08-08_11-47-21-AM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2020-08-08_11-46-53AM/logs/netdc_clean2020-08-08_11-47-21-AM.log
Network Configuration clean config END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Oracle Clusterware was already stopped and de-configured on node "ocmnode1"
Oracle Clusterware is stopped and de-configured successfully.
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2020-08-08_11-46-53AM/response/deinstall_2020-08-08_11-47-09-AM.rsp
Location of logs /tmp/deinstall2020-08-08_11-46-53AM/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2020-08-08_11-46-53AM/logs/deinstall_deconfig2020-08-08_11-47-09-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2020-08-08_11-46-53AM/logs/deinstall_deconfig2020-08-08_11-47-09-AM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to ocmnode1
Setting CLUSTER_NODES to ocmnode1
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2020-08-08_11-46-53AM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
.....
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2020-08-08_11-46-53AM' on node 'ocmnode1'
## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node.
Failed to delete directory '/u01/app/12.1.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -r /etc/oraInst.loc' as root on node(s) 'ocmnode1' at the end of the session.
Run 'rm -r /opt/ORCLfmap' as root on node(s) 'ocmnode1' at the end of the session.
Run 'rm -r /etc/oratab' as root on node(s) 'ocmnode1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############
Run the cleanup commands suggested at the end of the deinstall session as root on ocmnode1:

[root@ocmnode1 ~]# rm -r /etc/oraInst.loc
[root@ocmnode1 ~]# rm -rf /opt/ORCLfmap
[root@ocmnode1 ~]# rm -rf /etc/oratab
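The Grid deinstall above also reported that it failed to delete the directory /u01/app/12.1.0/grid itself. If the node's filesystem is still accessible and the software is no longer needed, that leftover home can be removed manually as root:

[root@ocmnode1 ~]# rm -rf /u01/app/12.1.0/grid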
Update Inventory for the Grid Home on all remaining nodes:
[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/oui/bin
[grid@ocmnode2 bin]$ export ORACLE_HOME=/u01/app/12.1.0/grid/
[grid@ocmnode2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ocmnode2}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4019 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
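To double-check the result, the node list recorded for the Grid home in the central inventory on the remaining node can be inspected; a sketch, assuming the default inventory location /u01/app/oraInventory:

[grid@ocmnode2 ~]$ grep -A3 "12.1.0/grid" /u01/app/oraInventory/ContentsXML/inventory.xml

The NODE_LIST for the Grid home should now contain only ocmnode2.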
Validation after node deletion:
Check the node list on the cluster. In the example below (taken from a different pair of nodes, ocmdrnode1 and ocmdrnode2), the node to be removed still appears in the list:
[grid@ocmdrnode1 bin]$ olsnodes -n
ocmdrnode1      1
ocmdrnode2      2
[grid@ocmdrnode1 bin]$ olsnodes -a -t -s -n
ocmdrnode1      1       Active  Hub     Unpinned
[grid@ocmdrnode1 bin]$ olsnodes -t -s -n
ocmdrnode1      1       Active          Unpinned
ocmdrnode2      2       Inactive        Unpinned
If the deleted node still shows up, remove it with the following command, run as root from a working node:
[root@ocmdrnode1 ~]# crsctl delete node -n ocmdrnode2
CRS-4661: Node ocmdrnode2 successfully deleted.
[grid@ocmdrnode1 addnode]$ olsnodes -n
ocmdrnode1      1
[grid@ocmdrnode1 addnode]$ olsnodes -t -s -n
ocmdrnode1      1       Active  Unpinned
Finally, verify the node removal with cluvfy from a remaining node:

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy stage -post nodedel -n ocmnode1

Performing post-checks for node removal

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Node removal check passed
Post-check for node removal was successful.