
GRID Installation

Oracle 19c Grid Infrastructure Installation on RHEL 8.10


Configure .bash_profile on Node1 and Node2 for the grid user.

nano /home/grid/.bash_profile

# Node1 (MUMDCNODE1)
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE1.HOMELAB.COM
export ORACLE_BASE=/u01/app/grid/gridbase
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

# Node2 (MUMDCNODE2)
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE2.HOMELAB.COM
export ORACLE_BASE=/u01/app/grid/gridbase
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
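As a quick sanity check (a sketch; adjust if your paths differ), source the profile on each node and confirm the variables resolve to the values above:

```shell
# Run as the grid user on each node after editing the profile.
source ~/.bash_profile
echo "SID=$ORACLE_SID"
echo "GRID_HOME=$GRID_HOME"
# GRID_HOME should exist once the Grid software is unzipped there:
[ -d "$GRID_HOME" ] || echo "WARNING: $GRID_HOME does not exist yet"
```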

Configure .bash_profile on Node1 and Node2 for the oracle user.

nano /home/oracle/.bash_profile

# Node1 (MUMDCNODE1)
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE1.HOMELAB.COM
export ORACLE_UNQNAME=ORAC19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=ORAC19C1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

# Node2 (MUMDCNODE2)
export CV_ASSUME_DISTID=OL8
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=MUMDCNODE2.HOMELAB.COM
export ORACLE_UNQNAME=ORAC19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=ORAC19C2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Important

The following steps must be performed on Node1 only.

Do not execute any commands on Node2.

Configure Passwordless SSH Setup

Log in as the grid user and execute the following steps to configure passwordless SSH.

cd $GRID_HOME/deinstall/

# For the oracle user
./sshUserSetup.sh -user oracle -hosts "MUMDCNODE1 MUMDCNODE2" -noPromptPassphrase -confirm -advanced

# For the grid user
./sshUserSetup.sh -user grid -hosts "MUMDCNODE1 MUMDCNODE2" -noPromptPassphrase -confirm -advanced
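After sshUserSetup.sh completes, you can verify passwordless SSH from Node1 with a short loop (hostnames as used in this guide; run it once as grid and once as oracle):

```shell
# Each host should print its hostname and date without a password prompt.
for host in MUMDCNODE1 MUMDCNODE2; do
  ssh -o BatchMode=yes "$host" "hostname; date" </dev/null \
    || echo "Passwordless SSH to $host FAILED"
done
```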

Pre-check for RAC Setup

We use the Cluster Verification Utility (cluvfy) to confirm that the cluster is ready for the Grid installation.

export CV_ASSUME_DISTID=OL8

# As the grid user
$GRID_HOME/runcluvfy.sh stage -pre crsinst -n MUMDCNODE1,MUMDCNODE2 -verbose

# As the grid user
$GRID_HOME/runcluvfy.sh comp clocksync -n all -verbose

If you encounter the error below during cluster verification, reboot both nodes and run the verification again.

ERROR:
PRVG-10467 : The default Oracle Inventory group could not be determined.

Bug

CVU may report the package "compat-libcap1-1.10" as a requirement on OL8 and OL9, which is incorrect. It is only needed on OL7.


Install GRID Infrastructure

Log in as the grid user and execute the following command.

Apply patch 6880880 to the Oracle GI home to upgrade the OPatch utility.

# Apply the Release Update (RU) patch during setup
$GRID_HOME/gridSetup.sh -applyRU /u01/patch/36582629
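Patch 6880880 ships as a zip that simply replaces the OPatch directory inside the Grid home. A minimal sketch of applying it (the zip filename below is an assumption; use the exact file you downloaded from My Oracle Support):

```shell
# Run as the grid user. The patch zip path/name below is hypothetical.
PATCH_ZIP=/u01/patch/p6880880_190000_Linux-x86-64.zip
cd "$GRID_HOME"
mv OPatch OPatch.bak        # keep the old OPatch as a fallback
unzip -q "$PATCH_ZIP"       # extracts a fresh OPatch/ into the Grid home
"$GRID_HOME"/OPatch/opatch version
```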

The Grid Infrastructure GUI Installer will begin.

Step 1

  • Select "Configure Oracle Grid Infrastructure for a New Cluster"

Step 2

  • Select "Configure an Oracle Standalone Cluster"

Step 3

  • Give your Cluster a Name. In our case we will name it: MUMBAI-CLUSTER
  • SCAN Name: RAC-SCAN
  • SCAN Port: 1521

Step 4

  • Click Add and add the second node to the list.

    • Public Hostname: MUMDCNODE2.HOMELAB.COM
    • Virtual Hostname: MUMDCNODE2-vip.HOMELAB.COM
    • Click Next to proceed to the next step.

Step 5

  • Select ASM & Private for interface ens192, as it will be used for the ASM disk drives and the private subnet.

Step 6

  • Select Use Oracle Flex ASM for the Storage.

Step 7

  • Select No for Configure Grid Infrastructure Management Repository and click Next.

Step 8

  • Give the Disk Group a name. In our case we named it OCR.
  • Select External for the ASM disk redundancy.
  • Change the ASM Disk Discovery Path.

ASM Disk Discovery Path

Change the ASM Disk Discovery Path to the following:

/dev/oracleasm/disks/*

(Screenshot: Step 8 - Change ASM Disk Discovery Path)
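Before accepting the discovery path, you can confirm that both nodes actually see the same ASMLib disks (assuming oracleasm was configured earlier in this series):

```shell
# Run on each node; the disk names and their ownership must match on both.
ls -l /dev/oracleasm/disks/
# As root, ASMLib can also list the labelled disks directly:
/usr/sbin/oracleasm listdisks
```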


Step 9

  • Select Use same passwords for these accounts, specify a password, and proceed to the next step.

Step 10

  • Select Do not use Intelligent Platform Management Interface (IPMI) and proceed to the next step.

Step 11

  • Leave the Register with Enterprise Manager (EM) Cloud Control unchecked and proceed to the Next step.

Step 12

  • Select asmadmin for the Oracle ASM Administrator (OSASM) group.
  • Select asmdba for the Oracle ASM DBA (OSDBA for ASM) group.
  • Select asmoper for the Oracle ASM Operator (OSOPER for ASM) group.

Step 13

  • Cross check the Oracle Base Directory and proceed to the next step.

Step 14

  • Cross check the Oracle Inventory Directory and proceed to the next step.

Step 15

  • Leave Automatically run configuration scripts unchecked, as we want full control of the installation process. We will execute the scripts manually as the root user.

Step 16.1

  • Prerequisite checks will be performed.
(Screenshot: Step 16 - Prerequisite Checks)

Step 16.2

  • Check the Ignore All box and click Next to proceed to the next step.

Step 17

  • Verify the configuration, save the response file if needed for future reference, and proceed to the next step.

Step 18.1

  • The Installation of Grid Infrastructure will start.

Step 18.2

  • When the Execute Configuration Scripts prompt appears, open a new terminal window as the root user and execute both scripts, one at a time, on Node1.
  • After the scripts have completed successfully on Node1, execute them on Node2.
  • Once both scripts have run successfully on both nodes, click OK.
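The two scripts in the prompt are typically the following (the oraInventory location is an assumption based on a default /u01/app layout; always copy the exact paths shown by the installer):

```shell
# Run as root, on Node1 first, then on Node2. Paths are the defaults
# implied by this guide's layout; confirm against the installer prompt.
/u01/app/oraInventory/orainstRoot.sh
/u01/app/grid/19.3.0/gridhome_1/root.sh
```

root.sh on the first node takes the longest, as it configures and starts the cluster stack; on the second node it mostly joins the existing cluster.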

CRS Status

You can simultaneously check the status of the cluster in a new terminal tab as the grid user.

watch crsctl stat res -t -init
watch crsctl status resource -t
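Once root.sh has completed on both nodes, a couple of standard crsctl/olsnodes checks confirm overall cluster health:

```shell
# Run as the grid user; every daemon on every node should report online.
crsctl check cluster -all
# Lists each node with its node number and status.
olsnodes -n -s
```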

CRS Log Location

$GRID_BASE/diag/crs/dcnode1/crs/trace/alert.log

You can now proceed to Database Creation.
