Oracle 19c Grid Infrastructure Installation on RHEL 8.10
GRID Installation
Configure .bash_profile on Node1 and Node2 for the grid User
nano /home/grid/.bash_profile
Node1:
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE1.HOMELAB.COM
export ORACLE_BASE=/u01/app/grid/gridbase
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Node2:
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE2.HOMELAB.COM
export ORACLE_BASE=/u01/app/grid/gridbase
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
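To make sure the profile is actually in effect, a quick sanity check after logging back in as grid on each node (the expected values come straight from the profiles above):
# on Node1 this should print +ASM1 and the grid home path
source ~/.bash_profile
echo $ORACLE_SID $GRID_HOME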
Configure .bash_profile on Node1 and Node2 for the oracle User
nano /home/oracle/.bash_profile
Node1:
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE1.HOMELAB.COM
export ORACLE_UNQNAME=ORAC19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=ORAC19C1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Node2:
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE2.HOMELAB.COM
export ORACLE_UNQNAME=ORAC19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=ORAC19C2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
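A similar sanity check for the oracle user can be run from a root shell on each node; su with a login shell sources .bash_profile before the echo runs:
# on Node1 this should print ORAC19C1 and the database home path
su - oracle -c 'echo $ORACLE_SID $ORACLE_HOME'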
Important
The following steps have to be performed only on Node1. Do not execute any commands on Node2.
Configure Passwordless SSH Setup
Login with the grid user and execute the following steps to configure passwordless SSH.
cd $GRID_HOME/deinstall/
./sshUserSetup.sh -user oracle -hosts "MUMDCNODE1 MUMDCNODE2" -noPromptPassphrase -confirm -advanced
./sshUserSetup.sh -user grid -hosts "MUMDCNODE1 MUMDCNODE2" -noPromptPassphrase -confirm -advanced
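After the script finishes, passwordless SSH should work in both directions for both users; each command must return the remote hostname without a password prompt:
# repeat as both grid and oracle, and from both nodes
ssh MUMDCNODE2 hostname
ssh MUMDCNODE1 hostname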
Pre-check for RAC Setup
We use the following cluvfy commands to check that our cluster is ready for the Grid installation.
export CV_ASSUME_DISTID=OL8
$GRID_HOME/runcluvfy.sh stage -pre crsinst -n MUMDCNODE1,MUMDCNODE2 -verbose
$GRID_HOME/runcluvfy.sh comp clocksync -n all -verbose
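If cluvfy flags fixable OS settings, it can also generate a fixup script for root to run; the -fixup flag is a standard runcluvfy option:
$GRID_HOME/runcluvfy.sh stage -pre crsinst -n MUMDCNODE1,MUMDCNODE2 -fixup -verbose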
If you encounter the following error during the cluster verification, reboot both nodes and perform the cluster verification once again.
ERROR:
PRVG-10467 : The default Oracle Inventory group could not be determined.
Bug
CVU may report the package "compat-libcap1-1.10" as a requirement on OL8 and OL9, which is incorrect; the package is only needed on OL7.
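To confirm on your own nodes that this is the known false positive rather than a genuinely missing RPM, check whether the package is even installed (it is not shipped with RHEL 8):
rpm -q compat-libcap1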
Install GRID Infrastructure
Login with the grid user and execute the following commands.
First, apply patch 6880880 to the Oracle GI home to upgrade the OPatch utility (a sketch follows), then launch the installer, applying Release Update 36582629 in the same step.
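A minimal sketch of the OPatch upgrade, assuming the patch has been downloaded to /u01/patch (the zip file name shown is the usual one for 19c, but treat it as an assumption):
# as grid: back up the shipped OPatch, extract the new one, confirm the version
mv $GRID_HOME/OPatch $GRID_HOME/OPatch.bak
unzip -q /u01/patch/p6880880_190000_Linux-x86-64.zip -d $GRID_HOME
$GRID_HOME/OPatch/opatch version
With OPatch updated, start the installer and apply the Release Update in the same step: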
$GRID_HOME/gridSetup.sh -applyRU /u01/patch/36582629
The Grid Infrastructure GUI Installer will begin.
Step 1
- Select "Configure Oracle Grid Infrastructure for a New Cluster"

Step 2
- Select "Configure an Oracle Standalone Cluster"

Step 3
- Give your Cluster a Name. In our case we will name it: MUMBAI-CLUSTER
- SCAN Name: RAC-SCAN (a resolution check follows below)
- SCAN Port: 1521
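Before proceeding, it is worth confirming that the SCAN name resolves; here we assume RAC-SCAN is defined in DNS (ideally with three addresses, per Oracle's recommendation):
nslookup RAC-SCAN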

Step 4
- Click on Add and add the second node to the list.
- Public Hostname: MUMDCNODE2.HOMELAB.COM
- Virtual Hostname: MUMDCNODE2-vip.HOMELAB.COM
- Click Next to proceed to the next step.

Step 5
- Select ASM & Private for interface ens192 as it will be used for the ASM Disk Drives and the Private Subnet.
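If you are unsure which interface carries which subnet, the assignment can be cross-checked on each node before selecting (ens192 is the interface name used in this lab):
ip addr show ens192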

Step 6
- Select Use Oracle Flex ASM for the Storage.

Step 7
- Select No for Configure Grid Infrastructure Management Repository and click Next.

Step 8
- Give the Disk Group a name. In our case we named it OCR.
- Select External for the ASM Disks Redundancy.
- Change the ASM Disk Discovery Path.

ASM Disk Discovery Path
Change the ASM Disk Discovery Path to the following:
/dev/oracleasm/disks/*
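To confirm that the discovery path will actually match devices, the ASMLib disks can be listed on both nodes (assuming the disks were already stamped with oracleasm during storage preparation):
# as root: disks stamped by ASMLib should appear on both nodes
oracleasm listdisks
ls -l /dev/oracleasm/disks/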
Step 9
- Select Use same passwords for these accounts and specify a Password and proceed to the Next step.

Step 10
- Select Do not use Intelligent Platform Management Interface (IPMI) and proceed to the Next step.

Step 11
- Leave the Register with Enterprise Manager (EM) Cloud Control unchecked and proceed to the Next step.

Step 12
- Select asmadmin for the Oracle ASM Administrator (OSASM) Group.
- Select asmdba for the Oracle ASM DBA (OSDBA for ASM) Group.
- Select asmoper for the Oracle ASM Operator (OSOPER for ASM) Group. (A quick membership check follows.)
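These groups must already exist, with the grid user as a member (they are normally created during OS preparation); membership can be verified quickly on each node:
id grid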

Step 13
- Cross check the Oracle Base Directory and proceed to the next step.

Step 14
- Cross check the Oracle Inventory Directory and proceed to the next step.

Step 15
- Leave Automatically run configuration scripts unchecked, as we want full control of the installation process. We will execute the scripts manually as the root user.

Step 16.1
- Prerequisite checks will be performed.

Step 16.2
- Check the Ignore All box and click Next to Proceed to the next step.

Step 17
- Verify the configuration, save the response file for future reference if required, and proceed to the next step.

Step 18.1
- The Installation of Grid Infrastructure will start.

Step 18.2
- When the Execute Configuration Scripts prompt appears, open a new terminal window as the root user and execute both scripts, one at a time, on Node1.
- After the scripts have completed successfully on Node1, execute them on Node2.
- Once both scripts have run successfully on both nodes, click OK. (The typical script paths are sketched below.)
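The two scripts the prompt refers to are typically the inventory script and the Grid root script. A sketch of the manual execution, assuming the default inventory location /u01/app/oraInventory:
# as root, one node at a time (Node1 first, then Node2)
/u01/app/oraInventory/orainstRoot.sh
/u01/app/grid/19.3.0/gridhome_1/root.sh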

CRS Status
You can simultaneously check the status of the cluster in a new tab with the grid user.
watch crsctl stat res -t -init
watch crsctl status resource -t
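Once the stack is up, overall cluster health can also be verified with the standard clusterware commands:
# check CRS, CSS, and EVM daemons on all nodes, then list node status
crsctl check cluster -all
olsnodes -n -s -t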
CRS LOG Location
$GRID_BASE/diag/crs/<hostname>/crs/trace/alert.log (where <hostname> is the node's short hostname in lowercase)