Hello Friends,
Adding a new node to an Oracle Real Application Clusters (RAC) 19c environment lets you expand your database infrastructure for better performance and availability. This tutorial walks through the process step by step, covering prerequisites, pre-checks, and the detailed node-addition steps.
So let's get started.
Prerequisites
Hardware Requirements
CPU: Ensure the new node has a compatible processor with other nodes.
Memory: Match the RAM capacity of existing nodes.
Storage: Sufficient shared storage for Oracle Clusterware and RAC database files.
Networking:
Public network interface (e.g., eth0 or ens160).
Private interconnect network interface (e.g., eth1 or ens192).
Virtual IP (VIP) address for the new node.
Software Requirements
Operating System: The same OS and patch level as existing nodes.
Oracle Software: Ensure the Oracle 19c Grid Infrastructure and Database binaries are available.
User Accounts: The grid and oracle users should exist on the new node with identical UID and GID as on existing nodes.
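To confirm that the UIDs and GIDs really line up, you can capture the `id grid` (and `id oracle`) output from each node and compare the numeric IDs. The helper below is a minimal sketch of that comparison (the function name is ours, not an Oracle tool); feed it the strings you collected from each node.

```shell
#!/bin/sh
# Compare two `id` output strings (one captured per node) on the numeric
# uid and primary gid. Illustrative helper, not an Oracle-provided check.
same_uid_gid() {
  a=$(printf '%s\n' "$1" | sed 's/uid=\([0-9]*\)(.* gid=\([0-9]*\)(.*/\1:\2/')
  b=$(printf '%s\n' "$2" | sed 's/uid=\([0-9]*\)(.* gid=\([0-9]*\)(.*/\1:\2/')
  if [ "$a" = "$b" ]; then echo matched; else echo MISMATCH; fi
}

same_uid_gid "uid=501(grid) gid=502(oinstall) groups=502(oinstall)" \
             "uid=501(grid) gid=502(oinstall) groups=502(oinstall)"
# prints: matched
```

In practice you would pass in `$(ssh dm01db01 id grid)` and `$(ssh dm01db03 id grid)` once SSH equivalence (below) is in place.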
Pre-Checks
SSH Equivalence: Configure passwordless SSH between the new node and existing nodes.
Copy /grid/app/grid19c/gr_home/deinstall/sshUserSetup.sh from a running node to the new node with scp, then execute it on the new node:
[grid@dm01db03 grid]$ ./sshUserSetup.sh -hosts "dm01db03" -user grid -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2025-01-13-15-03-06.log
Hosts are dm01db03
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING dm01db03.database.com (192.168.140.131) 56(84) bytes of data.
64 bytes from dm01db03.database.com (192.168.140.131): icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from dm01db03.database.com (192.168.140.131): icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from dm01db03.database.com (192.168.140.131): icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from dm01db03.database.com (192.168.140.131): icmp_seq=4 ttl=64 time=0.100 ms
64 bytes from dm01db03.database.com (192.168.140.131): icmp_seq=5 ttl=64 time=0.251 ms
--- dm01db03.database.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4074ms
rtt min/avg/max/mdev = 0.048/0.109/0.251/0.073 ms
Remote host reachability check succeeded.
The following hosts are reachable: dm01db03.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost dm01db03
numhosts 1
The script will setup SSH connectivity from the host dm01db03.database.com to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host dm01db03.database.com
and the remote hosts without being prompted for passwords or confirmations.
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE
directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:F4Jn5wQahfyd1QiBX9jHB8NnfQCKS5xCtypw8yMeoDc grid@dm01db03.database.com
The key's randomart image is:
+---[RSA 1024]----+
| .oo+.o=.*+o.|
| .o*.=..= =.*|
| o o +.Xo=+ . +.|
| . + o *.*+. |
|. E + + S o |
| . o + . . |
| . |
| |
| |
+----[SHA256]-----+
Creating .ssh directory and setting permissions on remote host dm01db03
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host dm01db03. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host dm01db03.
Warning: Permanently added 'dm01db03,192.168.140.131' (ECDSA) to the list of known hosts.
grid@dm01db03's password:
Done with creating .ssh directory and setting permissions on remote host dm01db03.
Copying local host public key to the remote host dm01db03
The user may be prompted for a password or passphrase here since the script would be using SCP for host dm01db03.
grid@dm01db03's password:
Done copying local host public key to the remote host dm01db03
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the /sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--dm01db03:--
Running /usr/bin/ssh -x -l grid dm01db03 date to verify SSH connectivity has been setup from local host to dm01db03.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon Jan 13 15:03:38 IST 2025
------------------------------------------------------------------------
SSH verification complete.
[grid@dm01db03 grid]$ cd
[grid@dm01db03 ~]$ cd .ssh/
[grid@dm01db03 .ssh]$ ls
authorized_keys config config.backup id_rsa id_rsa.pub known_hosts
[grid@dm01db03 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDe/nLZiT6akhvSKjqYC2nqt4+64CnivUJhaIcQUeH+r3HxIqVBMIHCLuu+ZJ7kIm/PZExCi6NokHHuE61+diM8atD2ciFZ21BGMvNKAc8GhtNUvU8JiXPcCNf0NGJ4f3wgKpTaYTAzAZ1X3vl7fuH7FVN6KV2O+Og4/f8wFQ4Z5Q== grid@dm01db03.database.com
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDe/nLZiT6akhvSKjqYC2nqt4+64CnivUJhaIcQUeH+r3HxIqVBMIHCLuu+ZJ7kIm/PZExCi6NokHHuE61+diM8atD2ciFZ21BGMvNKAc8GhtNUvU8JiXPcCNf0NGJ4f3wgKpTaYTAzAZ1X3vl7fuH7FVN6KV2O+Og4/f8wFQ4Z5Q== grid@dm01db03.database.com
[grid@dm01db03 .ssh]$ vi authorized_keys
[grid@dm01db03 .ssh]$ mv authorized_keys authorized_keys11
[grid@dm01db03 .ssh]$ vi authorized_keys
[grid@dm01db03 .ssh]$ ssh dm01db01
The authenticity of host 'dm01db01 (192.168.140.132)' can't be established.
ECDSA key fingerprint is SHA256:E+mgZ4n38IRmVlCFOKPsqnsZd858QdEGplAcBr6vxHE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dm01db01,192.168.140.132' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Jan 13 15:07:16 2025
[grid@dm01db01 ~]$ exit
logout
Connection to dm01db01 closed.
[grid@dm01db03 .ssh]$ ssh dm01db02
The authenticity of host 'dm01db02 (192.168.140.130)' can't be established.
ECDSA key fingerprint is SHA256:eZFfVO00U6po5ZwgBqI1qDHOBtoNLoyZsCqhVDsQTCU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dm01db02,192.168.140.130' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Jan 13 14:55:17 2025
[grid@dm01db02 ~]$
[grid@dm01db02 ~]$ exit
logout
Connection to dm01db02 closed.
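Beyond the interactive logins shown above, it is worth looping over every cluster node and confirming that ssh works in batch mode, where ssh fails instead of prompting if equivalence is not set up. A rough sketch follows; the node list matches this example cluster, and the --dry-run flag (our addition) just prints the commands without connecting.

```shell
#!/bin/sh
# Verify passwordless SSH to each cluster node. BatchMode=yes makes ssh
# fail rather than prompt when key-based equivalence is missing.
NODES="dm01db01 dm01db02 dm01db03"

verify_ssh_equiv() {
  for host in $NODES; do
    if [ "$1" = "--dry-run" ]; then
      echo "ssh -o BatchMode=yes -o ConnectTimeout=5 $host date"
    else
      ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" date \
        || echo "SSH equivalence NOT working for $host"
    fi
  done
}

verify_ssh_equiv --dry-run
```

Run it without --dry-run as grid (and again as oracle) on each node; every host should print only the date.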
Steps to Add a New Node
Step 1: Prepare the New Node
Install OS Packages:
yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
  glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libaio libaio.i686 \
  libaio-devel libaio-devel.i686 libX11 libX11.i686 libXau libXau.i686 \
  libXi libXi.i686 libXtst libXtst.i686 libgcc libgcc.i686 \
  libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
  libxcb libxcb.i686 make nfs-utils net-tools smartmontools sysstat \
  unixODBC unixODBC-devel gcc gcc-c++ libXext libXext.i686 \
  zlib-devel zlib-devel.i686 libnsl
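After installation you can diff the required list against what rpm reports as installed on the new node. A sketch using a pure comparison function (the helper name is ours), so the same logic can be fed output captured with `rpm -qa --queryformat '%{NAME}\n'`:

```shell
#!/bin/sh
# Print required package names that are absent from the installed list.
# Both arguments are files containing one package name per line.
missing_pkgs() {
  req=$(mktemp); inst=$(mktemp)
  sort -u "$1" > "$req"
  sort -u "$2" > "$inst"
  comm -23 "$req" "$inst"   # lines present only in the required list
  rm -f "$req" "$inst"
}
```

For example, save your required list to /tmp/required.txt, run `rpm -qa --queryformat '%{NAME}\n' > /tmp/installed.txt` on dm01db03, and `missing_pkgs /tmp/required.txt /tmp/installed.txt` prints anything still to install.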
Update /etc/hosts: Add entries for the new node's public, private, and VIP addresses on all nodes. For example, the existing cluster's /etc/hosts already contains:
#Public IP#######################
192.168.140.132 dm01db01.database.com dm01db01
192.168.140.130 dm01db02.database.com dm01db02
#################################
#Private IP######################
192.168.142.134 dm01db01-priv.database.com dm01db01-priv
192.168.142.135 dm01db02-priv.database.com dm01db02-priv
#################################
#VIP ############################
192.168.140.136 dm01db01-vip.database.com dm01db01-vip
192.168.140.138 dm01db02-vip.database.com dm01db02-vip
##################################
#SCAN IP##########################
192.168.140.141 dm01scan.database.com dm01scan
192.168.140.142 dm01scan.database.com dm01scan
192.168.140.144 dm01scan.database.com dm01scan
##################################
Now add the entries for the new node:
192.168.140.131 dm01db03.database.com dm01db03
192.168.142.137 dm01db03-priv.database.com dm01db03-priv
192.168.140.137 dm01db03-vip.database.com dm01db03-vip
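Because a stray duplicate IP in /etc/hosts can cause confusing VIP failures later, a quick sanity check before copying the file to all nodes helps. Note that the SCAN name legitimately appears three times, so only duplicate IP addresses are flagged (the helper name is ours):

```shell
#!/bin/sh
# Print any IP address that appears more than once in a hosts file.
# Duplicate hostnames are fine (e.g. the three SCAN entries); only the
# first field (the IP) is checked.
dup_ips() {
  awk '!/^[[:space:]]*#/ && NF {print $1}' "${1:-/etc/hosts}" | sort | uniq -d
}
```

No output means no duplicate IPs; run it on each node after distributing the updated file.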
Step 2: Add the Node to the Cluster
Run the pre-checks and the addnode Command:
Before running addnode, validate the new node with the Cluster Verification Utility (cluvfy) from an existing node. First, compare the new node against a reference node:
cluvfy comp peer -n dm01db03 -refnode dm01db02 -r 19
[grid@dm01db01 .ssh]$ cluvfy comp peer -n dm01db03 -refnode dm01db02 -r 19
This software is "384" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.
Performing following verification checks ...
Peer Compatibility ...
Physical memory ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 5.5199GB (5788040.0KB) 5.5199GB (5788032.0KB) matched
Available memory ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 3.5374GB (3709184.0KB) 2.7347GB (2867572.0KB) matched
Swap space ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 12GB (1.2582908E7KB) 12GB (1.2582908E7KB) matched
Free Space ...PASSED
Node Name Path Mount point Status Ref. node status Comment
---------------- ------------ ------------ ------------ ------------ ------------
dm01db03 /usr /usr 1.8369GB (1926144.0KB) 1.5986GB (1676288.0KB) matched
Node Name Path Mount point Status Ref. node status Comment
---------------- ------------ ------------ ------------ ------------ ------------
dm01db03 /sbin /usr 1.8369GB (1926144.0KB) 1.5986GB (1676288.0KB) matched
Node Name Path Mount point Status Ref. node status Comment
---------------- ------------ ------------ ------------ ------------ ------------
dm01db03 /var /var 4.834GB (5068800.0KB) 4.7686GB (5000192.0KB) matched
Node Name Path Mount point Status Ref. node status Comment
---------------- ------------ ------------ ------------ ------------ ------------
dm01db03 /etc / 15.5117GB (1.6265216E7KB) 15.2969GB (1.6039936E7KB) matched
Node Name Path Mount point Status Ref. node status Comment
---------------- ------------ ------------ ------------ ------------ ------------
dm01db03 /tmp / 15.5117GB (1.6265216E7KB) 15.2969GB (1.6039936E7KB) matched
User existence for "grid" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 grid(501) grid(501) matched
Group existence for "dba" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 dba(502) dba(502) matched
Group membership for "grid" in "dba" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 yes yes matched
Run level ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 5 5 matched
System architecture ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 x86_64 x86_64 matched
Kernel version ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 5.4.17-2136.307.3.1.el8uek.x86_64 5.4.17-2136.307.3.1.el8uek.x86_64 matched
Kernel param "semmsl" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 250 250 matched
Kernel param "semmns" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 32000 32000 matched
Kernel param "semopm" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 100 100 matched
Kernel param "semmni" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 128 128 matched
Kernel param "shmmax" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 4294967296 4294967296 matched
Kernel param "shmmni" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 4096 4096 matched
Kernel param "shmall" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 0 0 matched
Kernel param "file-max" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 6815744 6815744 matched
Kernel param "ip_local_port_range" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 9000 65500 9000 65500 matched
Kernel param "rmem_default" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 262144 262144 matched
Kernel param "rmem_max" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 4194304 4194304 matched
Kernel param "wmem_default" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 262144 262144 matched
Kernel param "wmem_max" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 1048576 1048576 matched
Kernel param "aio-max-nr" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 1048576 1048576 matched
Kernel param "panic_on_oops" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 1 1 matched
Package existence for "kmod (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 kmod-25-19.0.1.el8 (x86_64) kmod-25-19.0.1.el8 (x86_64) matched
Package existence for "kmod-libs (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 kmod-libs-25-19.0.1.el8 (x86_64) kmod-libs-25-19.0.1.el8 (x86_64) matched
Package existence for "binutils" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 binutils-2.30-125.0.1.el8_10 binutils-2.30-125.0.1.el8_10 matched
Package existence for "libgcc (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 libgcc-8.5.0-22.0.1.el8_10 (i686),libgcc-8.5.0-22.0.1.el8_10 (x86_64) libgcc-8.5.0-22.0.1.el8_10 (i686),libgcc-8.5.0-22.0.1.el8_10 (x86_64) matched
Package existence for "libstdc++ (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 libstdc++-8.5.0-22.0.1.el8_10 (i686),libstdc++-8.5.0-22.0.1.el8_10 (x86_64) libstdc++-8.5.0-22.0.1.el8_10 (i686),libstdc++-8.5.0-22.0.1.el8_10 (x86_64) matched
Package existence for "sysstat" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 sysstat-11.7.3-13.0.2.el8_10 sysstat-11.7.3-13.0.2.el8_10 matched
Package existence for "ksh" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 ksh-20120801-267.0.1.el8 ksh-20120801-254.0.1.el8 matched
Package existence for "make" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 make-4.2.1-11.el8 make-4.2.1-11.el8 matched
Package existence for "glibc (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 glibc-2.28-251.0.2.el8_10.5 (i686),glibc-2.28-251.0.2.el8_10.5 (x86_64) glibc-2.28-251.0.2.el8_10.5 (i686),glibc-2.28-251.0.2.el8_10.5 (x86_64) matched
Package existence for "glibc-devel (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 glibc-devel-2.28-251.0.2.el8_10.5 (x86_64) glibc-devel-2.28-251.0.2.el8_10.5 (x86_64) matched
Package existence for "libaio (x86_64)" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 libaio-0.3.112-1.el8 (x86_64) libaio-0.3.112-1.el8 (x86_64) matched
Package existence for "nfs-utils" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 nfs-utils-2.3.3-59.0.1.el8 nfs-utils-2.3.3-59.0.1.el8 matched
Package existence for "smartmontools" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 smartmontools-7.1-3.el8 smartmontools-7.1-3.el8 matched
Package existence for "net-tools" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 net-tools-2.0-0.52.20160912git.el8 net-tools-2.0-0.52.20160912git.el8 matched
Package existence for "policycoreutils" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 policycoreutils-2.9-19.0.1.el8 policycoreutils-2.9-19.0.1.el8 matched
Package existence for "policycoreutils-python-utils" ...PASSED
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
dm01db03 policycoreutils-python-utils-2.9-19.0.1.el8 policycoreutils-python-utils-2.9-19.0.1.el8 matched
Peer Compatibility ...PASSED
Verification of peer compatibility was successful.
CVU operation performed: peer compatibility
Date: Jan 13, 2025 3:24:04 PM
CVU version: 19.22.0.0.0 (122623x8664)
Clusterware version: 19.0.0.0.0
CVU home: /grid/app/grid19c/gr_home
Grid home: /grid/app/grid19c/gr_home
User: grid
Operating system: Linux5.4.17-2136.307.3.1.el8uek.x86_64
Next, run the CRS pre-installation stage check (the -fixup option generates fixup scripts for anything correctable):
cluvfy stage -pre crsinst -n dm01db03 -fixup
[grid@dm01db01 ~]$ cluvfy stage -pre crsinst -n dm01db03 -fixup
This software is "384" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.
Performing following verification checks ...
Physical Memory ...FAILED (PRVF-7530)
Available Physical Memory ...PASSED
Swap Size ...PASSED
Free Space: dm01db03:/usr,dm01db03:/sbin ...PASSED
Free Space: dm01db03:/var ...PASSED
Free Space: dm01db03:/etc,dm01db03:/tmp ...PASSED
Free Space: dm01db03:/grid/app/grid19c/gr_home ...PASSED
User Existence: grid ...
Users With Same UID: 501 ...PASSED
User Existence: grid ...PASSED
Group Existence: asmadmin ...PASSED
Group Existence: asmdba ...PASSED
Group Existence: oinstall ...PASSED
Group Membership: asmdba ...PASSED
Group Membership: asmadmin ...PASSED
Group Membership: oinstall(Primary) ...PASSED
Run Level ...PASSED
Hard Limit: maximum open file descriptors ...PASSED
Soft Limit: maximum open file descriptors ...PASSED
Hard Limit: maximum user processes ...PASSED
Soft Limit: maximum user processes ...PASSED
Soft Limit: maximum stack size ...PASSED
Architecture ...PASSED
OS Kernel Version ...PASSED
OS Kernel Parameter: semmsl ...PASSED
OS Kernel Parameter: semmns ...PASSED
OS Kernel Parameter: semopm ...PASSED
OS Kernel Parameter: semmni ...PASSED
OS Kernel Parameter: shmmax ...PASSED
OS Kernel Parameter: shmmni ...PASSED
OS Kernel Parameter: shmall ...PASSED
OS Kernel Parameter: file-max ...PASSED
OS Kernel Parameter: ip_local_port_range ...PASSED
OS Kernel Parameter: rmem_default ...PASSED
OS Kernel Parameter: rmem_max ...PASSED
OS Kernel Parameter: wmem_default ...PASSED
OS Kernel Parameter: wmem_max ...PASSED
OS Kernel Parameter: aio-max-nr ...PASSED
OS Kernel Parameter: panic_on_oops ...PASSED
Package: kmod-20-21 (x86_64) ...PASSED
Package: kmod-libs-20-21 (x86_64) ...PASSED
Package: binutils-2.30-49.0.2 ...PASSED
Package: libgcc-8.2.1 (x86_64) ...PASSED
Package: libstdc++-8.2.1 (x86_64) ...PASSED
Package: sysstat-10.1.5 ...PASSED
Package: ksh ...PASSED
Package: make-4.2.1 ...PASSED
Package: glibc-2.28 (x86_64) ...PASSED
Package: glibc-devel-2.28 (x86_64) ...PASSED
Package: libaio-0.3.110 (x86_64) ...PASSED
Package: nfs-utils-2.3.3-14 ...PASSED
Package: smartmontools-6.6-3 ...PASSED
Package: net-tools-2.0-0.51 ...PASSED
Package: policycoreutils-2.9-3 ...PASSED
Package: policycoreutils-python-utils-2.9-3 ...PASSED
Users With Same UID: 0 ...PASSED
Current Group ID ...PASSED
Root user consistency ...PASSED
Package: cvuqdisk-1.0.10-1 ...PASSED
Package: psmisc-22.6-19 ...PASSED
Host name ...PASSED
Node Connectivity ...
Hosts File ...PASSED
Check that maximum (MTU) size packet goes through subnet ...PASSED
Node Connectivity ...PASSED
Multicast or broadcast check ...PASSED
Network Time Protocol (NTP) ...PASSED
Same core file name pattern ...PASSED
User Mask ...PASSED
User Not In Group "root": grid ...PASSED
Time zone consistency ...PASSED
Path existence, ownership, permissions and attributes ...
Path "/var" ...PASSED
Path "/var/tmp/.oracle" ...PASSED
Path "/dev/shm" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/trace" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/lck" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/log" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/sweep" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/metadata" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/metadata_dgif" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/metadata_pv" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/alert" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/incpkg" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/stage" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/incident" ...PASSED
Path "/grid/app/grid19c/gr_base/diag/crs/dm01db01/crs/cdump" ...PASSED
Path existence, ownership, permissions and attributes ...PASSED
DNS/NIS name service ...PASSED
Domain Sockets ...PASSED
File system mount options for path GI_HOME ...PASSED
Daemon "avahi-daemon" not configured and running ...PASSED
Daemon "proxyt" not configured and running ...PASSED
loopback network interface address ...PASSED
Grid Infrastructure home path: /grid/app/grid19c/gr_home ...
'/grid/app/grid19c/gr_home' ...PASSED
Grid Infrastructure home path: /grid/app/grid19c/gr_home ...PASSED
User Equivalence ...PASSED
RPM Package Manager database ...INFORMATION (PRVG-11250)
Network interface bonding status of private interconnect network interfaces ...PASSED
/dev/shm mounted as temporary file system ...PASSED
File system mount options for path /var ...PASSED
DefaultTasksMax parameter ...PASSED
ASM Filter Driver configuration ...PASSED
Systemd login manager IPC parameter ...PASSED
ORAchk checks ...PASSED
Pre-check for cluster services setup was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Physical Memory ...FAILED
dm01db03: PRVF-7530 : Sufficient physical memory is not available on node
"dm01db03" [Required physical memory = 8GB (8388608.0KB)]
RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.
Refer to My Oracle Support notes "2548970.1" for more details regarding errors
PRVG-11250".
CVU operation performed: stage -pre crsinst
Date: Jan 13, 2025 3:49:03 PM
CVU version: 19.22.0.0.0 (122623x8664)
Clusterware version: 19.0.0.0.0
CVU home: /grid/app/grid19c/gr_home
Grid home: /grid/app/grid19c/gr_home
User: grid
Operating system: Linux5.4.17-2136.307.3.1.el8uek.x86_64
[grid@dm01db01 ~]$
Finally, run the node-addition pre-check:
cluvfy stage -pre nodeadd -n dm01db03
[grid@dm01db01 ~]$ cluvfy stage -pre nodeadd -n dm01db03
This software is "384" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.
Performing following verification checks ...
Physical Memory ...FAILED (PRVF-7530)
Available Physical Memory ...PASSED
Swap Size ...PASSED
Free Space: dm01db03:/usr,dm01db03:/sbin ...PASSED
Free Space: dm01db03:/var ...PASSED
Free Space: dm01db03:/etc,dm01db03:/tmp ...PASSED
Free Space: dm01db03:/grid/app/grid19c/gr_home ...PASSED
Free Space: dm01db01:/usr,dm01db01:/sbin ...PASSED
Free Space: dm01db01:/var ...PASSED
Free Space: dm01db01:/etc,dm01db01:/tmp ...PASSED
Free Space: dm01db01:/grid/app/grid19c/gr_home ...PASSED
User Existence: oracle ...
Users With Same UID: 502 ...PASSED
User Existence: oracle ...PASSED
User Existence: grid ...
Users With Same UID: 501 ...PASSED
User Existence: grid ...PASSED
User Existence: root ...
Users With Same UID: 0 ...PASSED
User Existence: root ...PASSED
Group Existence: asmadmin ...PASSED
Group Existence: asmoper ...PASSED
Group Existence: asmdba ...PASSED
Group Existence: oinstall ...PASSED
Group Membership: oinstall ...PASSED
Group Membership: asmdba ...PASSED
Group Membership: asmadmin ...PASSED
Group Membership: asmoper ...PASSED
Run Level ...PASSED
Hard Limit: maximum open file descriptors ...PASSED
Soft Limit: maximum open file descriptors ...PASSED
Hard Limit: maximum user processes ...PASSED
Soft Limit: maximum user processes ...PASSED
Soft Limit: maximum stack size ...PASSED
Architecture ...PASSED
OS Kernel Version ...PASSED
OS Kernel Parameter: semmsl ...PASSED
OS Kernel Parameter: semmns ...PASSED
OS Kernel Parameter: semopm ...PASSED
OS Kernel Parameter: semmni ...PASSED
OS Kernel Parameter: shmmax ...PASSED
OS Kernel Parameter: shmmni ...PASSED
OS Kernel Parameter: shmall ...PASSED
OS Kernel Parameter: file-max ...PASSED
OS Kernel Parameter: ip_local_port_range ...PASSED
OS Kernel Parameter: rmem_default ...PASSED
OS Kernel Parameter: rmem_max ...PASSED
OS Kernel Parameter: wmem_default ...PASSED
OS Kernel Parameter: wmem_max ...PASSED
OS Kernel Parameter: aio-max-nr ...PASSED
OS Kernel Parameter: panic_on_oops ...PASSED
Package: kmod-20-21 (x86_64) ...PASSED
Package: kmod-libs-20-21 (x86_64) ...PASSED
Package: binutils-2.30-49.0.2 ...PASSED
Package: libgcc-8.2.1 (x86_64) ...PASSED
Package: libstdc++-8.2.1 (x86_64) ...PASSED
Package: sysstat-10.1.5 ...PASSED
Package: ksh ...PASSED
Package: make-4.2.1 ...PASSED
Package: glibc-2.28 (x86_64) ...PASSED
Package: glibc-devel-2.28 (x86_64) ...PASSED
Package: libaio-0.3.110 (x86_64) ...PASSED
Package: nfs-utils-2.3.3-14 ...PASSED
Package: smartmontools-6.6-3 ...PASSED
Package: net-tools-2.0-0.51 ...PASSED
Package: policycoreutils-2.9-3 ...PASSED
Package: policycoreutils-python-utils-2.9-3 ...PASSED
Users With Same UID: 0 ...PASSED
Current Group ID ...PASSED
Root user consistency ...PASSED
Package: cvuqdisk-1.0.10-1 ...PASSED
Node Addition ...
CRS Integrity ...PASSED
Clusterware Version Consistency ...PASSED
'/grid/app/grid19c/gr_home' ...PASSED
Node Addition ...PASSED
Host name ...PASSED
Node Connectivity ...
Hosts File ...PASSED
Check that maximum (MTU) size packet goes through subnet ...PASSED
subnet mask consistency for subnet "192.168.142.0" ...PASSED
subnet mask consistency for subnet "192.168.140.0" ...PASSED
Node Connectivity ...PASSED
Multicast or broadcast check ...PASSED
ASM Integrity ...PASSED
Device Checks for ASM ...PASSED
Database home availability ...PASSED
OCR Integrity ...PASSED
Time zone consistency ...PASSED
Network Time Protocol (NTP) ...PASSED
User Not In Group "root": grid ...PASSED
Time offset between nodes ...PASSED
resolv.conf Integrity ...PASSED
DNS/NIS name service ...PASSED
User Equivalence ...PASSED
Software home: /grid/app/grid19c/gr_home ...PASSED
/dev/shm mounted as temporary file system ...PASSED
Pre-check for node addition was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre nodeadd".
Physical Memory ...FAILED
dm01db03: PRVF-7530 : Sufficient physical memory is not available on node
"dm01db03" [Required physical memory = 8GB (8388608.0KB)]
dm01db01: PRVF-7530 : Sufficient physical memory is not available on node
"dm01db01" [Required physical memory = 8GB (8388608.0KB)]
CVU operation performed: stage -pre nodeadd
Date: Jan 13, 2025 3:51:09 PM
CVU version: 19.22.0.0.0 (122623x8664)
Clusterware version: 19.0.0.0.0
CVU home: /grid/app/grid19c/gr_home
Grid home: /grid/app/grid19c/gr_home
User: grid
Operating system: Linux 5.4.17-2136.307.3.1.el8uek.x86_64
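For reference, the pre-check output above is produced by a cluvfy run from an existing node. With the node name from this walkthrough, the invocation looks roughly like this (run as the grid user from the Grid home):

```shell
# Cluster Verification Utility pre-check for node addition.
# dm01db03 is the node being added in this walkthrough.
cluvfy stage -pre nodeadd -n dm01db03 -verbose
```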
The only failed check is physical memory (PRVF-7530), because these lab nodes have less than the required 8 GB of RAM. Since that is expected in this test environment, we proceed and add the new node to the Grid Infrastructure with -ignoreSysPrereqs and -ignorePrereq:
./addnode.sh -silent CLUSTER_NEW_NODES={dm01db03} CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01db03-vip} -ignoreSysPrereqs -ignorePrereq
[grid@dm01db01 ~]$ cd $ORACLE_HOME
[grid@dm01db01 gr_home]$ cd addnode/
[grid@dm01db01 addnode]$ pwd
/grid/app/grid19c/gr_home/addnode
[grid@dm01db01 addnode]$ ls -ltr
total 16
-rw-r-----. 1 grid oinstall 1899 Aug 24 2017 addnode.pl
-rw-r-----. 1 grid oinstall 2106 Apr 17 2019 addnode_oraparam.ini.sbs
-rw-r--r--. 1 grid oinstall 2098 Apr 17 2019 addnode_oraparam.ini
-rwxr-x---. 1 grid oinstall 1737 Jan 8 20:46 addnode.sh
[grid@dm01db01 addnode]$ ./addnode.sh -silent CLUSTER_NEW_NODES={dm01db03} CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01db03-vip} -ignoreSysPrereqs -ignorePrereq
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-07-38PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-07-38PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Copy Files to Remote Nodes in progress.
.................................................. 6% Done.
.................................................. 11% Done.
....................
Copy Files to Remote Nodes successful.
Prepare Configuration in progress.
Prepare Configuration successful.
.................................................. 21% Done.
You can find the log of this install session at:
/grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-07-38PM.log
Instantiate files in progress.
Instantiate files successful.
.................................................. 49% Done.
Saving cluster inventory in progress.
.................................................. 83% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /grid/app/grid19c/gr_home was successful.
Please check '/grid/app/grid19c/oraInventory/logs/silentInstall2025-01-13_04-07-38PM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 90% Done.
Update Inventory in progress.
You can find the log of this install session at:
/grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-07-38PM.log
Update Inventory successful.
.................................................. 97% Done.
As a root user, execute the following script(s):
1. /grid/app/grid19c/oraInventory/orainstRoot.sh
2. /grid/app/grid19c/gr_home/root.sh
Execute /grid/app/grid19c/oraInventory/orainstRoot.sh on the following nodes:
[dm01db03]
Execute /grid/app/grid19c/gr_home/root.sh on the following nodes:
[dm01db03]
The scripts can be executed in parallel on all the nodes.
Successfully Setup Software with warning(s).
.................................................. 100% Done.
[grid@dm01db01 addnode]$
Now run orainstRoot.sh and root.sh as the root user on the new node:
[root@dm01db03 oraInventory]# ./orainstRoot.sh
Changing permissions of /grid/app/grid19c/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /grid/app/grid19c/oraInventory to oinstall.
The execution of the script is complete.
[root@dm01db03 oraInventory]#
[root@dm01db03 oraInventory]#
[root@dm01db03 oraInventory]# cd ..
[root@dm01db03 grid19c]# cd gr_home/
[root@dm01db03 gr_home]# ./root
root.sh root.sh.old root.sh.old.2 rootupgrade.sh
[root@dm01db03 gr_home]# ./root.sh
Check /grid/app/grid19c/gr_home/install/root_dm01db03.database.com_2025-01-13_16-12-39-340231171.log for the output of root script
[root@dm01db03 ~]# tail -200f /grid/app/grid19c/gr_home/install/root_dm01db03.database.com_2025-01-13_16-12-39-340231171.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /grid/app/grid19c/gr_home
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /grid/app/grid19c/gr_home/crs/install/crsconfig_params
The log of current session can be found at:
/grid/app/grid19c/gr_base/crsdata/dm01db03/crsconfig/rootcrs_dm01db03_2025-01-13_04-13-11PM.log
2025/01/13 16:13:13 CLSRSC-594: Executing installation step 1 of 19: 'ValidateEnv'.
2025/01/13 16:13:13 CLSRSC-594: Executing installation step 2 of 19: 'CheckFirstNode'.
2025/01/13 16:13:13 CLSRSC-594: Executing installation step 3 of 19: 'GenSiteGUIDs'.
2025/01/13 16:13:13 CLSRSC-594: Executing installation step 4 of 19: 'SetupOSD'.
Redirecting to /bin/systemctl restart rsyslog.service
2025/01/13 16:13:14 CLSRSC-594: Executing installation step 5 of 19: 'CheckCRSConfig'.
2025/01/13 16:13:15 CLSRSC-594: Executing installation step 6 of 19: 'SetupLocalGPNP'.
2025/01/13 16:13:15 CLSRSC-594: Executing installation step 7 of 19: 'CreateRootCert'.
2025/01/13 16:13:15 CLSRSC-594: Executing installation step 8 of 19: 'ConfigOLR'.
2025/01/13 16:13:19 CLSRSC-594: Executing installation step 9 of 19: 'ConfigCHMOS'.
2025/01/13 16:13:19 CLSRSC-594: Executing installation step 10 of 19: 'CreateOHASD'.
2025/01/13 16:13:20 CLSRSC-594: Executing installation step 11 of 19: 'ConfigOHASD'.
2025/01/13 16:13:23 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2025/01/13 16:13:40 CLSRSC-594: Executing installation step 12 of 19: 'SetupTFA'.
2025/01/13 16:13:40 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2025/01/13 16:13:58 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2025/01/13 16:14:17 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2025/01/13 16:14:18 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2025/01/13 16:14:24 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2025/01/13 16:15:47 CLSRSC-343: Successfully started Oracle Clusterware stack
2025/01/13 16:15:47 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
clscfg: EXISTING configuration version 19 detected.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2025/01/13 16:16:00 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2025/01/13 16:16:08 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Domain name to open is ASM/Self drc 4
Check the cluster and CRS status:
[root@dm01db03 gr_home]# crsctl check cluster -all
**************************************************************
dm01db01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
dm01db03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
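A quick way to scan `crsctl check cluster -all` output for problems is to filter out the healthy lines; anything left over needs attention. A minimal sketch, using sample lines in place of a live crsctl call:

```shell
# Filter CRS status lines: anything not reporting "is online" is a problem.
# On a live cluster you would pipe `crsctl check cluster -all` into the grep.
printf '%s\n' \
  "CRS-4537: Cluster Ready Services is online" \
  "CRS-4529: Cluster Synchronization Services is online" \
  "CRS-4533: Event Manager is online" \
  | grep -v "is online" || echo "all components online"
```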
[root@dm01db03 gr_home]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE dm01db03 STABLE
ora.chad
ONLINE ONLINE dm01db03 STABLE
ora.net1.network
ONLINE ONLINE dm01db03 STABLE
ora.ons
ONLINE ONLINE dm01db03 STABLE
ora.proxy_advm
OFFLINE OFFLINE dm01db03 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE dm01db03 STABLE
2 ONLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.CRS1.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db03 STABLE
2 OFFLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.DATA.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db03 STABLE
2 OFFLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE OFFLINE STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE OFFLINE STABLE
ora.RECO.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db03 STABLE
2 OFFLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE dm01db03 Started,STABLE
2 ONLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE dm01db03 STABLE
2 ONLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.cdbtst.db
1 ONLINE OFFLINE STABLE
2 ONLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE dm01db03 STABLE
ora.dm01db01.vip
1 ONLINE OFFLINE STABLE
ora.dm01db02.vip
1 ONLINE OFFLINE STABLE
ora.dm01db03.vip
1 ONLINE ONLINE dm01db03 STABLE
ora.qosmserver
1 ONLINE ONLINE dm01db03 STABLE
ora.scan1.vip
1 ONLINE OFFLINE STABLE
ora.scan2.vip
1 ONLINE OFFLINE STABLE
--------------------------------------------------------------------------------
[root@dm01db03 gr_home]#
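In the resource listing above, several resources (SCAN VIPs, SCAN listeners, node VIPs for dm01db01/dm01db02) show Target ONLINE but State OFFLINE; that usually means they should be running on another node, which is expected here since only dm01db03's stack is shown coming up. Their placement can be checked (and, if needed, they can be started) with srvctl:

```shell
# Check where the SCAN VIPs and SCAN listeners are currently running.
srvctl status scan
srvctl status scan_listener
```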
Extend the Oracle RAC Database home. Run addnode.sh from the addnode directory of the database home, this time as the oracle user:
./addnode.sh -silent "CLUSTER_NEW_NODES={dm01db03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01db03-vip}" -ignoreSysPrereqs -ignorePrereq
[oracle@dm01db01 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={dm01db03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dm01db03-vip}" -ignoreSysPrereqs -ignorePrereq
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-33-40PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-33-40PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
.................................................. 7% Done.
Copy Files to Remote Nodes in progress.
.................................................. 12% Done.
.................................................. 18% Done.
..............................
Copy Files to Remote Nodes successful.
You can find the log of this install session at:
/grid/app/grid19c/oraInventory/logs/addNodeActions2025-01-13_04-33-40PM.log
Instantiate files in progress.
Instantiate files successful.
.................................................. 52% Done.
Saving cluster inventory in progress.
.................................................. 89% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /oracle/app/orawork/product/19.0.0/db_1 was successful.
Please check '/grid/app/grid19c/oraInventory/logs/silentInstall2025-01-13_04-33-40PM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 96% Done.
As a root user, execute the following script(s):
1. /oracle/app/orawork/product/19.0.0/db_1/root.sh
Execute /oracle/app/orawork/product/19.0.0/db_1/root.sh on the following nodes:
[dm01db03]
Successfully Setup Software with warning(s).
.................................................. 100% Done.
[oracle@dm01db01 addnode]$
Connect to an instance that is already running and set the initialization parameters for the new instance (CDBTST3) in the spfile:
[oracle@dm01db01 addnode]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Jan 13 16:46:20 2025
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> show parameter unique
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string CDBTST
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> alter system set undo_tablespace=UNDOTBS1 sid='CDBTST3' scope=spfile;
System altered.
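Note that the log above points CDBTST3 at UNDOTBS1, which instance 1 already uses. In a typical RAC setup each instance gets its own undo tablespace; a sketch of that alternative (the tablespace name UNDOTBS3 and sizing are assumptions, not taken from the log):

```sql
-- Assumption: dedicated undo for the new instance (not what the log shows).
create undo tablespace UNDOTBS3 datafile '+DATA' size 1g autoextend on;
alter system set undo_tablespace=UNDOTBS3 sid='CDBTST3' scope=spfile;
```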
SQL> alter system set instance_number=3 sid='CDBTST3' scope=spfile;
System altered.
SQL> alter system set instance_name='CDBTST3' sid='CDBTST3' scope=spfile;
System altered.
SQL> alter system set thread=3 sid='CDBTST3' scope=spfile;
System altered.
Check the existing redo log groups and threads (the columns below are from v$log):
SQL> /
GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
1 1 13 209715200 512 2 YES INACTIVE 2746596 11-JAN-25 2929335 13-JAN-25 0
2 1 14 209715200 512 2 NO CURRENT 2929335 13-JAN-25 1.8447E+19 0
3 2 7 209715200 512 2 YES INACTIVE 2639392 09-JAN-25 2639519 09-JAN-25 0
4 2 6 209715200 512 2 YES INACTIVE 2639390 09-JAN-25 2639392 09-JAN-25 0
Add redo log groups for the new thread 3, multiplexed across +DATA and +RECO:
alter database add logfile thread 3 group 5 ('+DATA','+RECO') size 200m;
alter database add logfile thread 3 group 6 ('+DATA','+RECO') size 200m;
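One step the log does not show: before the new instance can use thread 3, the thread typically has to be enabled. This is standard RAC practice, assumed here rather than taken from the original session:

```sql
-- Assumption: not shown in the original log; usually required after adding
-- redo log groups for a new thread so the new instance can mount it.
alter database enable public thread 3;
```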
Now run DBCA in silent mode to add the instance. With the names used in this walkthrough, the command looks roughly like this (DBCA also accepts -sysDBAUserName/-sysDBAPassword for authentication; exact flags depend on your setup):
dbca -silent -addInstance -gdbName CDBTST -nodeName dm01db03 -instanceName CDBTST3
Alternatively, register the new instance with Clusterware manually using srvctl:
srvctl add instance -db CDBTST -instance CDBTST3 -node dm01db03
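Finally, start the new instance and confirm it is running. These commands use the database and instance names from this walkthrough and must be run against the live cluster:

```shell
# Start the new instance on dm01db03 and verify its status.
srvctl start instance -db CDBTST -instance CDBTST3
srvctl status database -db CDBTST
crsctl stat res ora.cdbtst.db -t
```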
Hope this helps
Regards
Sultan Khan