Adding a Node to Oracle RAC 12c

A DBA may need to add a brand-new node, or re-add an existing node that was previously deleted from the cluster, to the Clusterware. Complete the list of tasks below first.

Tasks for a new node:

  • Install the same version of the OS
  • Set up the environment variables
  • Install all required packages, including oracleasm
  • Configure networking (public, private, and VIP addresses)
  • Verify time synchronization
  • Set the recommended kernel parameters for Oracle
  • Create the groups and users (names, UIDs, and GIDs must match the existing nodes)
  • Set up passwordless SSH for the grid user (or whichever user owns the Grid Infrastructure installation) between the new node and all existing nodes
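Before running cluvfy, you can spot-check the kernel-parameter step yourself. A minimal sketch, not part of the Oracle tooling: it assumes you have saved `sysctl -a` output from each node to a file (the file names are hypothetical).

```shell
# Sketch: report kernel settings that are present, or valued differently,
# on only one of the two nodes. Inputs are `sysctl -a` captures.
compare_params() {
    sort "$1" > /tmp/params.a.sorted
    sort "$2" > /tmp/params.b.sorted
    # comm -3 suppresses lines common to both files, leaving only differences
    comm -3 /tmp/params.a.sorted /tmp/params.b.sorted
}
```

Usage would look like `compare_params node1-sysctl.txt node2-sysctl.txt`; any line printed is a setting to reconcile before proceeding.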

If it is an existing node, make sure it was cleaned up properly. You may go through: how to delete a node from a cluster.
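Before re-adding a previously deleted node, it is worth confirming that the clusterware no longer lists it. A small helper, sketched here (the node name and the `olsnodes` pipeline are illustrative, not from the original article):

```shell
# Sketch: read `olsnodes` output on stdin and succeed only when the given
# node name is absent, i.e. the earlier node deletion completed cleanly.
node_absent() {
    ! grep -qw "$1"
}

# Example (run on a surviving node; ocmnode3 is a hypothetical node name):
#   olsnodes | node_absent ocmnode3 && echo "ocmnode3 fully removed"
```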

Compare the settings on the new node with those on any existing node by running the command below, and fix any discrepancies it reports.

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy comp peer -n ocmnode1 -refnode ocmnode2 -r 12.1

Sample output:

[grid@ocmnode2 bin]$ ./cluvfy comp peer -n ocmnode1 -refnode ocmnode2 -r 12.1

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      3.7411GB (3922860.0KB)    3.8613GB (4048820.0KB)    mismatched
Physical memory <null>

Compatibility check: Available memory [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      3.4843GB (3653536.0KB)    2.316GB (2428544.0KB)     matched
Available memory <null>

Compatibility check: Swap space [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      3.9375GB (4128760.0KB)    3.9375GB (4128764.0KB)    mismatched
Swap space <null>

Compatibility check: Free disk space for "/usr" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: Free disk space for "/var" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: Free disk space for "/etc" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: Free disk space for "/u01/app/12.1.0/grid" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: Free disk space for "/sbin" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: Free disk space for "/tmp" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      23.6152GB (2.4762368E7KB)  19.5928GB (2.0544512E7KB)  matched
Free disk space <null>

Compatibility check: User existence for "grid" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      grid(54321)               grid(54321)               matched
User existence for "grid" check passed

Compatibility check: Group existence for "oinstall" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      oinstall(54321)           oinstall(54321)           matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      dba(54322)                dba(54322)                matched
Group existence for "dba" check passed

Compatibility check: Group membership for "grid" in "oinstall (Primary)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      yes                       yes                       matched
Group membership for "grid" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "grid" in "dba" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      no                        no                        matched
Group membership for "grid" in "dba" check passed

Compatibility check: Run level [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      5                         5                         matched
Run level check passed

Compatibility check: System architecture [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      x86_64                    x86_64                    matched
System architecture check passed

Compatibility check: Kernel version [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      2.6.32-431.el6.x86_64     3.8.13-16.2.1.el6uek.x86_64  mismatched
Kernel version check failed

Compatibility check: Kernel param "semmsl" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      250                       250                       matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      32000                     32000                     matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      100                       100                       matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      128                       128                       matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      68719476736               68719476736               matched
Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      4096                      4096                      matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      0                         0                         matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      6815744                   6815744                   matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      9000 65500                9000 65500                matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      262144                    262144                    matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      4194304                   4194304                   matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      262144                    262144                    matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      1048576                   1048576                   matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      1048576                   1048576                   matched
Kernel param "aio-max-nr" check passed

Compatibility check: Kernel param "panic_on_oops" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      1                         1                         matched
Kernel param "panic_on_oops" check passed

Compatibility check: Package existence for "binutils" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2-5.36.el6  matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "compat-libcap1" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      compat-libcap1-1.10-1     compat-libcap1-1.10-1     matched
Package existence for "compat-libcap1" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      compat-libstdc++-33-3.2.3-69.el6 (x86_64)  compat-libstdc++-33-3.2.3-69.el6 (x86_64)  matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      libgcc-4.4.7-23.0.1.el6 (x86_64)  libgcc-4.4.7-23.0.1.el6 (x86_64)  matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      libstdc++-4.4.7-23.0.1.el6 (x86_64)  libstdc++-4.4.7-23.0.1.el6 (x86_64)  matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      libstdc++-devel-4.4.7-23.0.1.el6 (x86_64)  libstdc++-devel-4.4.7-23.0.1.el6 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      sysstat-9.0.4-22.el6      sysstat-9.0.4-22.el6      matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "gcc" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      gcc-4.4.7-23.0.1.el6      gcc-4.4.7-23.0.1.el6      matched
Package existence for "gcc" check passed

Compatibility check: Package existence for "gcc-c++" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      gcc-c++-4.4.7-23.0.1.el6  gcc-c++-4.4.7-23.0.1.el6  matched
Package existence for "gcc-c++" check passed

Compatibility check: Package existence for "ksh" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      missing                   missing                   matched
Package existence for "ksh" check passed

Compatibility check: Package existence for "make" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      make-3.81-20.el6          make-3.81-20.el6          matched
Package existence for "make" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      glibc-2.12-1.132.el6 (x86_64)  glibc-2.12-1.132.el6 (x86_64)  matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      glibc-devel-2.12-1.132.el6 (x86_64)  glibc-devel-2.12-1.132.el6 (x86_64)  matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      libaio-0.3.107-10.el6 (x86_64)  libaio-0.3.107-10.el6 (x86_64)  matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      libaio-devel-0.3.107-10.el6 (x86_64)  libaio-devel-0.3.107-10.el6 (x86_64)  matched
Package existence for "libaio-devel (x86_64)" check passed

Compatibility check: Package existence for "nfs-utils" [reference node: ocmnode2]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  ocmnode1      nfs-utils-1.2.3-39.el6    nfs-utils-1.2.3-39.el6    matched
Package existence for "nfs-utils" check passed

Verification of peer compatibility was successful.
Checks passed for the following node(s):
        ocmnode1
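
The kernel-parameter rows above are simple value comparisons between nodes. As a minimal sketch (my own illustration, not part of cluvfy), the same spot-check can be done by hand on the new node; the expected value for file-max (6815744) is taken from the output above:

```shell
# Manually verify one kernel parameter the way the cluvfy rows above do.
# /proc/sys/fs/file-max mirrors the sysctl setting fs.file-max.
expected=6815744
actual=$(cat /proc/sys/fs/file-max)   # equivalent to: sysctl -n fs.file-max
if [ "$actual" -ge "$expected" ]; then
    echo "fs.file-max OK: $actual"
else
    echo "fs.file-max mismatch: $actual (expected >= $expected)"
fi
```

This is handy when cluvfy flags a single parameter and you want to confirm a fix took effect without re-running the whole comparison.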

Run the following command to check the prerequisite status of the new node (or of the existing problematic node):

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy stage -pre crsinst -n ocmnode1

[grid@ocmnode2 bin]$ ./cluvfy stage -pre crsinst -n ocmnode1
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "ocmnode2"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) ocmnode1
TCP connectivity check passed for subnet "192.168.56.0"
Check: Node connectivity using interfaces on subnet "192.168.10.0"
Node connectivity passed for subnet "192.168.10.0" with node(s) ocmnode1
TCP connectivity check passed for subnet "192.168.10.0"
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
ocmnode1
Available memory check passed
Swap space check passed
Free disk space check passed for "ocmnode1:/usr,ocmnode1:/var,ocmnode1:/etc,ocmnode1:/u01/app/12.1.0/grid,ocmnode1:/sbin,ocmnode1:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
ocmnode1
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check failed
Check failed on nodes:
ocmnode1
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check failed for "ksh"
Check failed on nodes:
ocmnode1
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
ocmnode1
Clock synchronization check using Network Time Protocol(NTP) failed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
WARNING:
PRVF-5640 : Both 'search' and 'domain' entries are present in file "/etc/resolv.conf" on the following nodes: ocmnode1
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for Network interface bonding status of private interconnect network interfaces ...
Check for Network interface bonding status of private interconnect network interfaces passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
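
With a run this long it is easy to miss a failure. A small helper of my own (not from cluvfy) is to save the pre-check output to a file and filter it down to the failed checks and the nodes they failed on; the sample log contents below are taken from the run above, and the filename `precheck.log` is an assumption:

```shell
# In practice, capture the real run with:
#   ./cluvfy stage -pre crsinst -n ocmnode1 2>&1 | tee precheck.log
# For illustration, seed the log with a few lines from the output above.
cat > precheck.log <<'EOF'
Total memory check failed
Check failed on nodes:
ocmnode1
Available memory check passed
Swap space check passed
Kernel version check failed
Check failed on nodes:
ocmnode1
EOF

# Print each failed check plus the two lines that follow it
# (the "Check failed on nodes:" header and the node list).
grep -A2 'check failed$' precheck.log
```

Each failure this surfaces (total memory, kernel version, the missing ksh package, NTP not running, and so on) should be corrected on the new node before attempting the addNode step.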

The following command generates a fix-up script for the failed prerequisites:

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy stage -pre crsinst -n ocmnode1 -fixup
[grid@ocmnode2 .ssh]$ cd /u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy stage -pre crsinst -n ocmnode1 -fixup
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "ocmnode2"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.10.0"
Node connectivity passed for subnet "192.168.10.0" with node(s) ocmnode1
TCP connectivity check passed for subnet "192.168.10.0"
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) ocmnode1
TCP connectivity check passed for subnet "192.168.56.0"
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
ocmnode1
Available memory check passed
Swap space check passed
Free disk space check passed for "ocmnode1:/usr,ocmnode1:/var,ocmnode1:/etc,ocmnode1:/u01/app/12.1.0/grid,ocmnode1:/sbin,ocmnode1:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
ocmnode1
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "aio-max-nr" on node "ocmnode1"
Kernel parameter check passed for "aio-max-nr"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "ocmnode1"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check failed for "compat-libcap1"
Check failed on nodes:
ocmnode1
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check failed for "ksh"
Check failed on nodes:
ocmnode1
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check failed for "libaio-devel(x86_64)"
Check failed on nodes:
ocmnode1
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
ocmnode1
Clock synchronization check using Network Time Protocol(NTP) failed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
WARNING:
PRVF-5640 : Both 'search' and 'domain' entries are present in file "/etc/resolv.conf" on the following nodes: ocmnode1
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for Network interface bonding status of private interconnect network interfaces ...
Check for Network interface bonding status of private interconnect network interfaces passed
Starting check for /dev/shm mounted as temporary file system ...
ERROR:
PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "1002" megabytes which is less than the required size of "2048" megabytes on node ""
Check for /dev/shm mounted as temporary file system failed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
ERROR:
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "ocmnode1.localdomain"
Check for zeroconf check failed
Pre-check for cluster services setup was unsuccessful on all the nodes.
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************
--------------                ---------------     ----------------
Check failed.                 Failed on nodes     Reboot required?
--------------                ---------------     ----------------
zeroconf check                ocmnode1            no
Group Membership: dba         ocmnode1            no
OS Kernel Parameter:          ocmnode1            no
aio-max-nr
OS Kernel Parameter:          ocmnode1            no
panic_on_oops
Execute "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" as root user on nodes "ocmnode1" to perform the fix up operations manually
Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.2.0_grid/runfixup.sh" has completed on nodes "ocmnode1"
Fix: zeroconf check
"zeroconf check" was successfully fixed on all the applicable nodes
Fix: Group Membership: dba
"Group Membership: dba" was successfully fixed on all the applicable nodes
Fix: OS Kernel Parameter: aio-max-nr
"OS Kernel Parameter: aio-max-nr" was successfully fixed on all the applicable nodes
Fix: OS Kernel Parameter: panic_on_oops
"OS Kernel Parameter: panic_on_oops" was successfully fixed on all the applicable nodes

Next, run the pre-check for node addition:

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/bin
[grid@ocmnode2 bin]$ ./cluvfy stage -pre nodeadd -n ocmnode1
[grid@ocmnode2 bin]$ ./cluvfy stage -pre nodeadd -n ocmnode1
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "ocmnode2"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Checking shared resources...
Checking CRS home location...
Path "/u01/app/12.1.0/grid" either already exists or can be successfully created on nodes: "ocmnode1"
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.10.0"
Node connectivity passed for subnet "192.168.10.0" with node(s) ocmnode2,ocmnode1
TCP connectivity check passed for subnet "192.168.10.0"
Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) ocmnode1,ocmnode2
TCP connectivity check passed for subnet "192.168.56.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Total memory check failed
Check failed on nodes:
ocmnode2,ocmnode1
Available memory check passed
Swap space check passed
Free disk space check passed for "ocmnode2:/usr,ocmnode2:/var,ocmnode2:/etc,ocmnode2:/u01/app/12.1.0/grid,ocmnode2:/sbin,ocmnode2:/tmp"
Free disk space check passed for "ocmnode1:/usr,ocmnode1:/var,ocmnode1:/etc,ocmnode1:/u01/app/12.1.0/grid,ocmnode1:/sbin,ocmnode1:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check failed for "compat-libcap1"
Check failed on nodes:
ocmnode1
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check failed for "ksh"
Check failed on nodes:
ocmnode2,ocmnode1
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check failed for "libaio-devel(x86_64)"
Check failed on nodes:
ocmnode1
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "asmadmin" passed
Group existence check passed for "asmoper"
Membership check for user "grid" in group "asmoper" passed
Group existence check passed for "asmdba"
Membership check for user "grid" in group "asmdba" passed
Group existence check passed for "oinstall"
Membership check for user "grid" in group "oinstall" passed
Check for multiple users with UID value 0 passed
User existence check passed for "root"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Checking OCR integrity...
Disks "+OCR/ocmnode-cluster/OCRFILE/registry.255.1047853917" are managed by ASM.
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Disks "ORCL:OCRDISK1,ORCL:OCRDISK2,ORCL:OCRDISK3" are managed by ASM.
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
ocmnode2,ocmnode1
Clock synchronization check using Network Time Protocol(NTP) failed
User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes
WARNING:
PRVF-5640 : Both 'search' and 'domain' entries are present in file "/etc/resolv.conf" on the following nodes: ocmnode2,ocmnode1
Check for integrity of file "/etc/resolv.conf" passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was unsuccessful on all the nodes.
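The fix-up script only handles the items cluvfy marks as fixable; the remaining failures above must be corrected manually, as root, on each node named in the log. The sketch below addresses the failures reported in this run (missing `ksh`, `compat-libcap1`, `libaio-devel` packages, PRVE-0426 on /dev/shm, PRVF-5507 on NTP); the total-memory failure simply means the VM needs more RAM. The exact package names and the ntpd slewing setup should be verified against your platform's installation guide.

```shell
# Run as root on the node(s) named in the failures (here ocmnode1/ocmnode2).

# Install the packages cluvfy reported as missing:
yum install -y ksh compat-libcap1 libaio-devel

# PRVE-0426: grow /dev/shm to at least 2 GB now, and make it persistent
# with an /etc/fstab entry such as:
#   tmpfs  /dev/shm  tmpfs  size=2g  0 0
mount -o remount,size=2g /dev/shm

# PRVF-5507: either remove /etc/ntp.conf (Oracle CTSS then runs in active
# mode and handles time sync itself), or run ntpd with the slewing option
# (-x) as Oracle recommends, e.g. in /etc/sysconfig/ntpd:
#   OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
service ntpd start && chkconfig ntpd on
```

After the fixes, re-run `./cluvfy stage -pre nodeadd -n ocmnode1` and confirm it completes successfully before proceeding with the node addition.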
[grid@ocmnode2 bin]$ ./cluvfy stage -pre nodeadd -n ocmnode1

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "ocmnode2"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
Path "/u01/app/12.1.0/grid" either already exists or can be successfully created on nodes: "ocmnode1"
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.10.0"
Node connectivity passed for subnet "192.168.10.0" with node(s) ocmnode2,ocmnode1
TCP connectivity check passed for subnet "192.168.10.0"


Check: Node connectivity using interfaces on subnet "192.168.56.0"
Node connectivity passed for subnet "192.168.56.0" with node(s) ocmnode1,ocmnode2
TCP connectivity check passed for subnet "192.168.56.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check failed
Check failed on nodes:
        ocmnode2,ocmnode1
Available memory check passed
Swap space check passed
Free disk space check passed for "ocmnode2:/usr,ocmnode2:/var,ocmnode2:/etc,ocmnode2:/u01/app/12.1.0/grid,ocmnode2:/sbin,ocmnode2:/tmp"
Free disk space check passed for "ocmnode1:/usr,ocmnode1:/var,ocmnode1:/etc,ocmnode1:/u01/app/12.1.0/grid,ocmnode1:/sbin,ocmnode1:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check failed for "compat-libcap1"
Check failed on nodes:
        ocmnode1
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check failed for "ksh"
Check failed on nodes:
        ocmnode2,ocmnode1
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check failed for "libaio-devel(x86_64)"
Check failed on nodes:
        ocmnode1
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "asmadmin" passed
Group existence check passed for "asmoper"
Membership check for user "grid" in group "asmoper" passed
Group existence check passed for "asmdba"
Membership check for user "grid" in group "asmdba" passed
Group existence check passed for "oinstall"
Membership check for user "grid" in group "oinstall" passed
Check for multiple users with UID value 0 passed
User existence check passed for "root"
Check for multiple users with UID value 54321 passed
User existence check passed for "grid"

Checking ASMLib configuration.
Check for ASMLib configuration passed.

Checking OCR integrity...
Disks "+OCR/ocmnode-cluster/OCRFILE/registry.255.1047853917" are managed by ASM.

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...
Disks "ORCL:OCRDISK1,ORCL:OCRDISK2,ORCL:OCRDISK3" are managed by ASM.

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
ocmnode2,ocmnode1
Clock synchronization check using Network Time Protocol(NTP) failed


User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes


WARNING:
PRVF-5640 : Both 'search' and 'domain' entries are present in file "/etc/resolv.conf" on the following nodes: ocmnode2,ocmnode1

Check for integrity of file "/etc/resolv.conf" passed


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Pre-check for node addition was unsuccessful on all the nodes.
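Before ignoring these prerequisites, the reported failures could instead be corrected on the affected nodes. A minimal sketch, run as root on each failing node (package names are taken from the cluvfy output above; the service commands assume a RHEL/OL 6-style init system, as used with 12c in this walkthrough — systemd systems would use systemctl instead):

```shell
# Install the packages cluvfy reported as missing on ocmnode1/ocmnode2
yum install -y compat-libcap1 ksh libaio-devel

# Start NTP and enable it at boot so the clock-synchronization check passes
service ntpd start
chkconfig ntpd on
```

The total-memory mismatch is a hardware/VM sizing difference rather than a configuration fix, which is why it is simply accepted in this test environment.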

Since the remaining pre-check failures are known and acceptable in this test setup, run addnode.sh from the addnode directory of the Grid home on an existing, working node, passing -ignoreSysPrereqs and -ignorePrereq:

[grid@ocmnode2 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={ocmnode1} CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip} -ignoreSysPrereqs -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 15883 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4029 MB    Passed

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2020-08-08_10-48-12PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.
..................................................   80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[ocmnode1]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[ocmnode1]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[root@ocmnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@ocmnode1 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_ocmnode1.localdomain_2020-08-10_01-35-07.log for the output of root script
[grid@ocmnode2 ~]$ cat root_ocmnode1.localdomain_2020-08-10_01-35-07.log 
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2020/08/10 01:35:08 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2020/08/10 01:35:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2020/08/10 01:35:09 CLSRSC-363: User ignored prerequisites during installation

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'ocmnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'ocmnode1'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'ocmnode1'
CRS-2677: Stop of 'ora.OCR.dg' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'ocmnode1'
CRS-2677: Stop of 'ora.DATA.dg' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ocmnode1'
CRS-2677: Stop of 'ora.asm' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'ocmnode2'
CRS-2676: Start of 'ora.scan1.vip' on 'ocmnode2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'ocmnode1'
CRS-2677: Stop of 'ora.ons' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'ocmnode1'
CRS-2677: Stop of 'ora.net1.network' on 'ocmnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'ocmnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ocmnode1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ocmnode1'
CRS-2673: Attempting to stop 'ora.storage' on 'ocmnode1'
CRS-2677: Stop of 'ora.storage' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ocmnode1'
CRS-2677: Stop of 'ora.ctssd' on 'ocmnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ocmnode1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ocmnode1'
CRS-2677: Stop of 'ora.cssd' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'ocmnode1'
CRS-2677: Stop of 'ora.crf' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ocmnode1'
CRS-2677: Stop of 'ora.gipcd' on 'ocmnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ocmnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ocmnode1'
CRS-2672: Attempting to start 'ora.evmd' on 'ocmnode1'
CRS-2676: Start of 'ora.evmd' on 'ocmnode1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ocmnode1'
CRS-2676: Start of 'ora.gpnpd' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ocmnode1'
CRS-2676: Start of 'ora.gipcd' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ocmnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ocmnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ocmnode1'
CRS-2676: Start of 'ora.diskmon' on 'ocmnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ocmnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'ocmnode1'
CRS-2676: Start of 'ora.ctssd' on 'ocmnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ocmnode1'
CRS-2676: Start of 'ora.asm' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ocmnode1'
CRS-2676: Start of 'ora.storage' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'ocmnode1'
CRS-2676: Start of 'ora.crf' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ocmnode1'
CRS-2676: Start of 'ora.crsd' on 'ocmnode1' succeeded
CRS-6017: Processing resource auto-start for servers: ocmnode1
CRS-2672: Attempting to start 'ora.net1.network' on 'ocmnode1'
CRS-2676: Start of 'ora.net1.network' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'ocmnode1'
CRS-2676: Start of 'ora.ons' on 'ocmnode1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'ocmnode2'
CRS-2677: Stop of 'ora.scan1.vip' on 'ocmnode2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'ocmnode1'
CRS-2676: Start of 'ora.scan1.vip' on 'ocmnode1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'ocmnode1' succeeded
CRS-6016: Resource auto-start has completed for server ocmnode1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2020/08/10 01:37:21 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
2020/08/10 01:37:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

The node has been added to the clusterware successfully.
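Before checking individual resources, cluvfy's post-nodeadd stage can verify the addition as a whole. This is an optional check, not part of the original run; execute it as the grid user from an existing node:

```shell
# Optional verification of the node addition (grid user, existing node).
cd /u01/app/12.1.0/grid/bin
./cluvfy stage -post nodeadd -n ocmnode1 -verbose
```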

Validation:

[grid@ocmnode2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.OCR.dg
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.asm
               ONLINE  ONLINE       ocmnode1                 Started,STABLE
               ONLINE  ONLINE       ocmnode2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
ora.ons
               ONLINE  ONLINE       ocmnode1                 STABLE
               ONLINE  ONLINE       ocmnode2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ocmnode1                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       ocmnode2                 169.254.43.148 192.1
                                                             68.10.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       ocmnode2                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.ocmnode1.vip
      1        ONLINE  ONLINE       ocmnode1                 STABLE
ora.ocmnode2.vip
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ocmnode1                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ocmnode2                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ocmnode2                 STABLE
--------------------------------------------------------------------------------

Adding the Node to the Oracle Database Home:

[oracle@ocmnode2 addnode]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/addnode
[oracle@ocmnode2 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={ocmnode1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip}"
[oracle@ocmnode2 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={ocmnode1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 12822 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
[FATAL] [INS-30160] Installer has detected that the nodes [ocmnode1] specified for addnode operation have uncleaned inventory.
   ACTION: Ensure that the inventory location /u01/app/oraInventory is cleaned before performing addnode procedure.
[oracle@ocmnode2 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={ocmnode1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 12822 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2020-08-10_03-17-13PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2020-08-10_03-17-13PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2020-08-10_03-17-13PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.
SEVERE:Remote 'AttachHome' failed on nodes: 'ocmnode1'. Refer to '/u01/app/oraInventory/logs/addNodeActions2020-08-10_03-17-13PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
 /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 ORACLE_HOME_NAME=OraDB12Home1 CLUSTER_NODES=ocmnode2,ocmnode1 "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=<node on which command is to be run>.
Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'ocmnode1'. Refer to '/u01/app/oraInventory/logs/addNodeActions2020-08-10_03-17-13PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
 /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 CLUSTER_NODES=ocmnode2,ocmnode1 CRS=false  "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=<node on which command is to be run>.
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
..................................................   80% Done.

Saving cluster inventory successful.
WARNING:OUI-10234:Failed to copy the root script, /u01/app/oraInventory/orainstRoot.sh to the cluster nodes ocmnode1.[Error in copying file '/u01/app/oraInventory/orainstRoot.sh' present inside directory '/' on nodes 'ocmnode1'. [PRKC-1080 : Failed to transfer file "/u01/app/oraInventory/orainstRoot.sh" to any of the given nodes "ocmnode1 ".
Error on node ocmnode1:/bin/tar: ./u01/app/oraInventory/orainstRoot.sh: Cannot open: No such file or directory
/bin/tar: Exiting with failure status due to previous errors]]
 Please copy them manually to these nodes and execute the script.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0/dbhome_1 was unsuccessful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/oracle/product/12.1.0/dbhome_1/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[ocmnode1]
Execute /u01/app/oracle/product/12.1.0/dbhome_1/root.sh on the following nodes:
[ocmnode1]


..........
Update Inventory in progress.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'ocmnode1'. Refer to '/u01/app/oraInventory/logs/UpdateNodeList2020-08-10_03-17-13PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
 /u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 CLUSTER_NODES=ocmnode2,ocmnode1 "NODES_TO_SET={ocmnode2,ocmnode1}" CRS=false  "INVENTORY_LOCATION=/u01/app/oraInventory" -invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=<node on which command is to be run>.
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
[WARNING] [INS-10016] Installer failed to update the cluster related details, for this Oracle home, in the inventory on all/some of the nodes
   ACTION: You may chose to retry the operation, without continuing further. Alternatively you can refer to information given below and manually execute the mentioned commands on the failed nodes now or later to update the inventory.
*MORE DETAILS*

Execute the following command on node(s) [ocmnode1]:
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -jreLoc /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre -paramFile /u01/app/oracle/product/12.1.0/dbhome_1/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 CLUSTER_NODES=<Local Node> "NODES_TO_SET={ocmnode2,ocmnode1}" -invPtrLoc "/u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc" -local

..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
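The INS-30160 failure in the first run above was caused by a stale inventory entry left on ocmnode1 from the earlier node deletion. A minimal cleanup sketch (run on the failed node as the oracle user; the paths match this environment, but verify your own inventory location first):

```shell
# Sketch only: detach the stale database home from the local inventory on
# ocmnode1 so addnode.sh can re-attach it cleanly. Check the HOME entry in
# /u01/app/oraInventory/ContentsXML/inventory.xml before running this.
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -detachHome \
    -silent -local ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
```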

Execute the root.sh script on the new node.

[root@ocmnode1 app]# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Check /u01/app/oracle/product/12.1.0/dbhome_1/install/root_ocmnode1.localdomain_2020-08-10_15-27-42.log for the output of root script

[root@ocmnode1 app]# cat /u01/app/oracle/product/12.1.0/dbhome_1/install/root_ocmnode1.localdomain_2020-08-10_15-27-42.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0/dbhome_1
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
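As an optional sanity check (not part of the original run), the node addition can be validated from an existing node with `cluvfy stage -post nodeadd`. The GRID_HOME path below is the one used throughout this article; the guard just makes the snippet harmless on a machine without cluvfy.

```shell
# Optional post-check of the node addition, run from an existing node.
GRID_HOME=/u01/app/12.1.0/grid    # grid home used throughout this article
if [ -x "$GRID_HOME/bin/cluvfy" ]; then
    "$GRID_HOME/bin/cluvfy" stage -post nodeadd -n ocmnode1 -verbose
else
    echo "cluvfy not found under $GRID_HOME/bin - adjust GRID_HOME"
fi
```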

While adding the node, I ran into a number of issues. They are described below and may help you.

Issue # 1: ORA-00845: MEMORY_TARGET not supported on this system

[grid@ocmnode2 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={ocmnode1} CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip} -ignoreSysPrereqs -ignorePrereq

Got the below error while running the root.sh script.

ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/ocmnode1/crs/trace/ohasd_oraagent_grid.trc".
CRS-2883: Resource 'ora.asm' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2020/08/08 22:55:25 CLSRSC-117: Failed to start Oracle Clusterware stack

Died at /u01/app/12.1.0/grid/crs/install/crsinstall.pm line 914.
The command '/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl ' execution failed

Solution: The /dev/shm (tmpfs) filesystem needs to be extended; ORA-00845 means it is smaller than the instance's MEMORY_TARGET.

[root@ocmnode1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_racprodn1-lv_root   45G   20G   23G  47% /
tmpfs                             1001M  299M  702M  30% /dev/shm
/dev/sda1                         477M   55M  397M  13% /boot
/dev/sr0                           56M   56M     0 100% /media/VBox_GAs_5.2.10
[root@ocmnode1 ~]# cat /etc/fstab
#
/dev/mapper/vg_racprodn1-lv_root /                       ext4    defaults        1 1
UUID=31e76520-7f50-4592-9ced-d4f03d6d95cc /boot                   ext4    defaults        1 2
/dev/mapper/vg_racprodn1-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Modify the below line in /etc/fstab:
tmpfs /dev/shm tmpfs size=2g 0 0

[root@ocmnode1 ~]# cat /etc/fstab
#
/dev/mapper/vg_racprodn1-lv_root /                       ext4    defaults        1 1
UUID=31e76520-7f50-4592-9ced-d4f03d6d95cc /boot                   ext4    defaults        1 2
/dev/mapper/vg_racprodn1-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   size=2g       0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Reboot the system.

Re-run the root.sh script.
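Before re-running root.sh, it can help to confirm (as any user) that /dev/shm is now at least as large as the planned MEMORY_TARGET. A minimal sketch; the 2 GB value matches the `size=2g` fstab entry above and should be adjusted to your setting:

```shell
# Sketch: compare the tmpfs size of /dev/shm with the planned MEMORY_TARGET.
required_kb=$((2 * 1024 * 1024))                           # 2 GB, matching size=2g
shm_kb=$(df -Pk /dev/shm 2>/dev/null | awk 'NR==2 {print $2}')
if [ -z "$shm_kb" ]; then
    echo "SKIP: /dev/shm not mounted here"
elif [ "$shm_kb" -ge "$required_kb" ]; then
    echo "OK: /dev/shm is ${shm_kb} KB (>= ${required_kb} KB)"
else
    echo "WARN: /dev/shm is ${shm_kb} KB, smaller than MEMORY_TARGET"
fi
```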

Issue # 2: One or more node names “ocmnode1.localdomain” contain one or more of the following invalid characters “.”

[grid@ocmnode2 addnode]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base has been set to /u01/app/grid
[grid@ocmnode2 addnode]$ pwd
/u01/app/12.1.0/grid/addnode
[grid@ocmnode2 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={ocmnode1.localdomain} CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip.localdomain} -ignoreSysPrereqs -ignorePrereq

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 16183 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
[FATAL] [INS-30131] Initial setup required for the execution of installer validations failed.
   CAUSE: Failed to access the temporary location.
   ACTION: Ensure that the current user has required permissions to access the temporary location.
*ADDITIONAL INFORMATION:*
Exception details
 - PRVG-11322 : One or more node names "ocmnode1.localdomain" contain one or more of the following invalid characters "."

Solution:

Do not use the domain name in silent mode; pass the short host names instead. The correct syntax is:

[grid@ocmnode2 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={ocmnode1} CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip} -ignoreSysPrereqs -ignorePrereq
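Since the silent-mode node-list parser rejects any name containing a dot, a quick pre-check of the names you are about to pass can save a failed run. A hypothetical helper (not part of the installer):

```shell
# Hypothetical pre-check: silent-mode addnode.sh rejects node names
# containing '.', so validate the short host names before calling it.
check_node_name() {
    case "$1" in
        *.*) echo "INVALID: $1 (strip the domain; use the short host name)" ;;
        *)   echo "OK: $1" ;;
    esac
}
check_node_name ocmnode1.localdomain
check_node_name ocmnode1
```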

Issue # 3: [FATAL] [INS-40906] Duplicate host name ocmnode1 found in the node information table for Oracle Clusterware install

[grid@ocmnode2 addnode]$ pwd
/u01/app/12.1.0/grid/addnode
[grid@ocmnode2 addnode]$ ls -l addnode.sh
-rwxr-xr-x 1 grid oinstall 3575 Aug  7 22:18 addnode.sh
[grid@ocmnode2 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={ocmnode1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 15562 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4019 MB    Passed
[FATAL] [INS-40906] Duplicate host name ocmnode1 found in the node information table for Oracle Clusterware install.
   CAUSE: Duplicate entries have been made in the node information table for Oracle Clusterware installation.
   ACTION: Remove duplicate entries in the Oracle Clusterware node information table.

Cause: The problematic node's information still exists in the inventory files of the other nodes.

[grid@ocmnode2 addnode]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
...
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ocmnode1"/>
      <NODE NAME="ocmnode2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="ocmnode1"/>
      <NODE NAME="ocmnode2"/>
   </NODE_LIST>
</HOME>
...
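The node names can be pulled out of such an excerpt with a one-line sed, which makes it easy to spot a stale entry. Shown here against an inline sample; on a real node point it at the central inventory file instead:

```shell
# Sample fragment of inventory.xml; on a real node read
# /u01/app/oraInventory/ContentsXML/inventory.xml instead.
inventory='<NODE_LIST>
   <NODE NAME="ocmnode1"/>
   <NODE NAME="ocmnode2"/>
</NODE_LIST>'
# Print one node name per line.
nodes=$(printf '%s\n' "$inventory" | sed -n 's/.*<NODE NAME="\([^"]*\)".*/\1/p')
printf '%s\n' "$nodes"
```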

Update the inventory with the list of good (currently working) nodes:

[grid@ocmnode2 bin]$ pwd
/u01/app/12.1.0/grid/oui/bin
[grid@ocmnode2 bin]$ export ORACLE_HOME=/u01/app/12.1.0/grid/
[grid@ocmnode2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={ocmnode2}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4019 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@ocmnode2 addnode]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml | grep NODE
   <NODE_LIST>
      <NODE NAME="ocmnode2"/>
   </NODE_LIST>
   <NODE_LIST>
      <NODE NAME="ocmnode2"/>
   </NODE_LIST>

Issue # 4: [FATAL] [INS-10008] Session initialization failed (error retrieving node numbers of the existing nodes)

[grid@ocmnode2 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={ocmnode1} CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip} -ignoreSysPrereqs -ignorePrereq  

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 16182 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4029 MB    Passed

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2020-08-08_10-28-51PM.log
SEVERE:Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.
[FATAL] [INS-10008] Session initialization failed
   CAUSE: An unexpected error occured while initializing the session.
   ACTION: Contact Oracle Support Services or refer logs
   SUMMARY:
[grid@ocmnode2 addnode]$ more /u01/app/oraInventory/logs/addNodeActions2020-08-08_10-28-51PM.log

Status of node 'ocmnode1':
Node is okay
--------------------------------------------------------------------------

INFO: Setting variable 'REMOTE_CLEAN_MACHINES' to 'ocmnode1'. Received the value from a code block.
INFO: /u01/app/12.1.0/grid/oui/bin/../bin/lsnodes: struct size 0
INFO: Vendor clusterware is not detected.
INFO: Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.
SEVERE: Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.
INFO: Alert Handler not registered, using Super class functionality
INFO: Alert Handler not registered, using Super class functionality
INFO: User Selected: Yes/OK

INFO: Shutting down OUISetupDriver.JobExecutorThread
SEVERE: [FATAL] [INS-10008] Session initialization failed
   CAUSE: An unexpected error occured while initializing the session.
   ACTION: Contact Oracle Support Services or refer logs
   SUMMARY:
       - .
Refer associated stacktrace #oracle.install.commons.util.exception.DefaultErrorAdvisor:299
INFO: Advice is ABORT
SEVERE: Unconditional Exit
INFO: Adding ExitStatus FAILURE to the exit status set
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -1
INFO: Shutdown Oracle Grid Infrastructure 12c Release 1 Installer
INFO: Unloading Setup Driver

Cause: oracleasm is not working properly on the problematic node. Run the below commands there to verify.

$ oracleasm init
# If this errors, try reinstalling the oracleasm packages.
$ oracleasm scandisks
# If the command below returns no disks, oracleasm has to be fixed.
$ oracleasm listdisks

Solution: Fix the oracleasm issue and make sure all ASM disks are accessible from the new node.
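The checks above can be wrapped in a small guard script that prints a one-word status. This is a hypothetical helper, not part of the oracleasm toolset; it assumes the oracleasm CLI from the oracleasm-support package and does nothing harmful on a host where it is absent:

```shell
# Sketch of the verification sequence; prints a one-word status.
check_oracleasm() {
    command -v oracleasm >/dev/null 2>&1 || { echo "MISSING"; return 0; }
    oracleasm init >/dev/null 2>&1      || { echo "INIT_FAILED"; return 1; }
    oracleasm scandisks >/dev/null 2>&1 || { echo "SCAN_FAILED"; return 1; }
    # listdisks must return at least one disk for addnode to work.
    [ -n "$(oracleasm listdisks 2>/dev/null)" ] \
        && echo "DISKS_OK" \
        || { echo "NO_DISKS"; return 1; }
}
check_oracleasm
```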

Issue # 5: On the Oracle database home.

[FATAL] [INS-30160] Installer has detected that the nodes [ocmnode1] specified for addnode operation have uncleaned inventory

[oracle@ocmnode2 addnode]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/addnode
[oracle@ocmnode2 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={ocmnode1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ocmnode1-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 12822 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
[FATAL] [INS-30160] Installer has detected that the nodes [ocmnode1] specified for addnode operation have uncleaned inventory.
   ACTION: Ensure that the inventory location /u01/app/oraInventory is cleaned before performing addnode procedure.

Solution: Remove or rename the oraInventory directory on the new node.

[root@ocmnode1 app]# pwd
/u01/app
[root@ocmnode1 app]# mv oraInventory oraInventory_bck
[root@ocmnode1 app]# ls -lrt
total 20
drwxr-xr-x. 3 root   oinstall 4096 Aug  7 00:27 12.1.0
drwxrwxr-x  2 grid   oinstall 4096 Aug  8 17:40 grid:oinstall
drwxrwx---  5 grid   oinstall 4096 Aug  9 20:53 oraInventory_bck
drwxrwxr-x. 8 grid   oinstall 4096 Aug 10 01:34 grid
drwxrwxr-x. 5 oracle oinstall 4096 Aug 10 15:19 oracle