Oracle 19c Grid Infrastructure Installation on RHEL 8.10
OS Pre-Requisites for Node1 and Node2
Stop and disable the firewall
The firewall can be re-enabled after the installation is complete.
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop avahi-daemon
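If the firewall is turned back on after the installation, the Oracle ports have to be opened. A minimal sketch, assuming firewalld and the default listener port 1521:
firewall-cmd --permanent --add-port=1521/tcp
firewall-cmd --reload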
Configure and Start the Chrony NTP Service (Network Time Protocol)
systemctl enable chronyd.service
systemctl restart chronyd.service
chronyc -a 'burst 4/4'
chronyc -a makestep
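To confirm that chrony is actually synchronizing on both nodes, check the tracking and source status:
chronyc tracking
chronyc sources -v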
Install nano, wget, dnsmasq and the iSCSI Initiator Utilities
yum install nano wget dnsmasq iscsi-initiator-utils -y
systemctl enable iscsi.service
systemctl start iscsi.service
systemctl status iscsi.service
Change SELinux to Permissive
1. Edit the SELinux configuration file:
Open the /etc/selinux/config file and set SELINUX to permissive.
sudo nano /etc/selinux/config
Change the existing line
SELINUX=disabled
to
SELINUX=permissive
2. Reboot the system:
Since SELinux is currently disabled, a reboot is required to activate it.
reboot
3. Verify the status after reboot:
Check if SELinux is in permissive mode:
sestatus
SELinux status: enabled
Current mode: permissive
setenforce:
If SELinux is enabled and running, you can toggle between enforcing and permissive modes at runtime:
setenforce 0
Change the Kernel if Required
Caution: change the default boot kernel only if required.
ls -l /boot/vmlinuz-*
grubby --set-default /boot/vmlinuz-4.18.0-553.8.1.el8_10.x86_64
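Confirm which kernel will be used on the next boot (the kernel version above is only an example from this environment):
grubby --default-kernel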
Import the Oracle Linux GPG key for Red Hat Enterprise Linux 8
wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol8 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Create a temporary yum repository configuration file /etc/yum.repos.d/ol8-temp.repo with the following as the minimum required content:
nano /etc/yum.repos.d/ol8-temp.repo
[ol8_baseos_latest]
name=Oracle Linux 8 BaseOS Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
Install oraclelinux-release-el8:
dnf install oraclelinux-release-el8
Disable ol8-temp.repo and any other remaining repo files that may conflict with the Oracle Linux yum server:
mv /etc/yum.repos.d/ol8-temp.repo /etc/yum.repos.d/ol8-temp.repo.disabled
yum update
dnf config-manager --enable ol8_UEKR7
dnf clean all
dnf update
Oracle RAC Prerequisites
The package oracle-database-preinstall-19c installs and configures all the prerequisites on Oracle Linux using the Oracle Unbreakable Enterprise Kernel (UEK).
Install the Oracle Database Preinstall Package
wget https://public-yum.oracle.com/repo/OracleLinux/OL8/appstream/x86_64/getPackage/oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
yum install oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
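The preinstall rpm creates the oracle user, the oinstall and dba groups, and applies the required kernel parameters. A quick sanity check (fs.aio-max-nr and fs.file-max are two of the parameters it sets):
id oracle
sysctl fs.aio-max-nr fs.file-max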
Download & Install Oracle ASMLib v3
Oracle ASMLib v3 Download Link
dnf install kmod-redhat-oracleasm
This is required for proper functioning of Oracle Automatic Storage Management
yum install oracleasmlib-3.0.0-13.el8.x86_64.rpm
wget https://yum.oracle.com/repo/OracleLinux/OL8/addons/x86_64/getPackage/oracleasm-support-3.0.0-6.el8.x86_64.rpm
yum install oracleasm-support-3.0.0-6.el8.x86_64.rpm
Create Grid User and the required directories
groupadd -g 5004 asmadmin
groupadd -g 5005 asmdba
groupadd -g 5006 asmoper
useradd -u 5008 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
usermod -g oinstall -G dba,oper,asmdba oracle
usermod -g oinstall -G asmadmin,asmdba,asmoper,dba grid
mkdir -p /u01/app/grid/19.3.0/gridhome_1
mkdir -p /u01/app/grid/gridbase/
mkdir -p /u01/app/oracle/database/19.3.0/dbhome_1
chown -R oracle:oinstall /u01/
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/
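Verify the group memberships and directory ownership before continuing:
id grid
ls -ld /u01/app/grid /u01/app/oracle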
Change the Password for the oracle user and the grid user.
passwd oracle
passwd grid
Set Limits
Add the entries below to the /etc/security/limits.conf file to define resource limits for the oracle and grid users:
nano /etc/security/limits.conf
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid soft memlock 134217728
grid hard memlock 134217728
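Since pam_limits applies these values at login, a quick check via a fresh login (or su -) confirms they took effect; the flags below report nofile, nproc, stack and memlock respectively:
su - grid -c "ulimit -n; ulimit -u; ulimit -s; ulimit -l"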
Add Hosts to /etc/hosts
Public IP: The public IP address is for the server. This is the same as any server IP address, a unique address that exists in /etc/hosts.
Private IP: Oracle RAC requires private IP addresses to manage the CRS, the clusterware heartbeat process and the cache fusion layer.
Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).
nano /etc/hosts
# Public
192.168.1.110 MUMDCNODE1.HOMELAB.COM MUMDCNODE1
192.168.1.120 MUMDCNODE2.HOMELAB.COM MUMDCNODE2
# Private
192.168.10.110 MUMDCNODE1-PRIV.HOMELAB.COM MUMDCNODE1-PRIV
192.168.10.120 MUMDCNODE2-PRIV.HOMELAB.COM MUMDCNODE2-PRIV
# Virtual
192.168.1.70 MUMDCNODE1-VIP.HOMELAB.COM MUMDCNODE1-VIP
192.168.1.80 MUMDCNODE2-VIP.HOMELAB.COM MUMDCNODE2-VIP
# SCAN
192.168.1.41 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.42 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.43 RAC-SCAN.HOMELAB.COM RAC-SCAN
Ping all Public, Private and Virtual IP's mentioned in host file.
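A small loop saves typing each ping by hand (host names assumed from the /etc/hosts example above; the VIPs will only answer once the clusterware brings them online):
for h in MUMDCNODE1 MUMDCNODE2 MUMDCNODE1-PRIV MUMDCNODE2-PRIV; do ping -c 2 $h; done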
DNS Settings for Node1 and Node2
DNS is another prerequisite for the RAC installation. Because this is a test environment, we will follow the steps below on both Node1 and Node2.
The dnsmasq Rpm Should Be Installed
rpm -qa | grep dnsmasq
dnsmasq-2.79-*.el8.x86_64
Edit /etc/resolv.conf
File Like Below
nano /etc/resolv.conf
nameserver 127.0.0.1
search HOMELAB.COM
options timeout:1
options attempts:5
Make the /etc/resolv.conf File Write-Protected
chattr +i /etc/resolv.conf
If Needed To Disable Write-Protected Mode, Run the Below Command
chattr -i /etc/resolv.conf
Add Below Entries To the /etc/dnsmasq.conf File
nano /etc/dnsmasq.conf
except-interface=virbr0
bind-interfaces
addn-hosts=/etc/racdns.conf
Create the /etc/racdns.conf File And Enter Below Entries
nano /etc/racdns.conf
192.168.1.41 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.42 RAC-SCAN.HOMELAB.COM RAC-SCAN
192.168.1.43 RAC-SCAN.HOMELAB.COM RAC-SCAN
Restart Service for the changes to take effect
systemctl restart dnsmasq
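If dnsmasq should also come up automatically after a reboot, enable it as well:
systemctl enable dnsmasq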
Test the DNS
nslookup MUMDCNODE1
Test the SCAN Name Resolution
nslookup RAC-SCAN
It Should return the following Output
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.41
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.42
Name: RAC-SCAN.HOMELAB.COM
Address: 192.168.1.43
Rename the iSCSI Initiator on each host and modify the initiator name
nano /etc/iscsi/initiatorname.iscsi
cat /etc/iscsi/initiatorname.iscsi
It Should return the following Output
InitiatorName=iqn.2025-01.com.Node1:MUMBAIDCNODE1
Discover the iSCSI Disks
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
Enable automatic login during startup
iscsiadm -m node --op update -n node.startup -v automatic
Rescan the existing sessions using:
iscsiadm -m session --rescan
Login to the target server
iscsiadm -m node -T iqn.2024-12.com.asmdisks:target1 --login
Kill the existing sessions using:
iscsiadm -m node -T <iqn> -p <ip address>:<port number> -u
To log out of a specific system target, enter the following command:
iscsiadm --mode node --target iqn.2024-12.com.asmdata:target1 --portal 192.168.1.100 --logout
Display a list of all current sessions logged in:
iscsiadm -m session
Storage server
On the storage server, make sure the following is set so that the target configuration is saved automatically:
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
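Assuming the storage server uses LIO/targetcli (implied by the saveconfig.json path above), the setting and a manual save look like this from the targetcli shell:
targetcli
/> set global auto_save_on_exit=true
/> saveconfig
/> exit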
Check List of iSCSI attached Disks
fdisk -l
or
lsblk -I 8 -d
or
lsblk -f
Create the Partitions
fdisk /dev/sdb
Press `n` (add a new partition).
Press `p` (select p for Primary).
`Enter`
`Enter`
`Enter`
Press `w` (write table to disk and exit)
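The same single full-disk partition can also be created non-interactively, for example with parted (device name /dev/sdb assumed; repeat per disk):
parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%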
Download Oracle Database 19c Grid Infrastructure (19.3) for Linux x86-64, copy the .zip file to $GRID_HOME, and extract the contents.
cd /u01/app/grid/19.3.0/gridhome_1/
chmod 775 LINUX.X64_193000_grid_home.zip
unzip -q LINUX.X64_193000_grid_home.zip
Install the cvuqdisk package from the grid home as the root user on all nodes.
Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message Package cvuqdisk not installed when you run Cluster Verification Utility.
cd /u01/app/grid/19.3.0/gridhome_1/cv/rpm
rpm -Uvh cvuqdisk*
You Should get the following output on Node1
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Using default group oinstall to install package
Updating / installing...
1:cvuqdisk-1.0.10-1 ################################# [100%]
Copy the same rpm to the 2nd Node and execute the installation as root.
scp ./cvuqdisk* root@MUMDCNODE2:/tmp
rpm -Uvh /tmp/cvuqdisk*
Output on Node2
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Using default group oinstall to install package
Updating / installing...
1:cvuqdisk-1.0.10-1 ################################# [100%]
Configure Oracle ASM
First, check that internet connectivity is working, for example: ping google.com
/usr/sbin/oracleasm configure -i
Output
Default user to own the driver interface [ ]: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Maximum number of disks that may be used in ASM system [2048]:
Enable iofilter if kernel supports it (y/n) [y]: n
Writing Oracle ASM library driver configuration: done
Verify the Configuration
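Running oracleasm configure without -i simply prints the current settings, which is a quick way to confirm the values entered above:
/usr/sbin/oracleasm configure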
Initialize ASMLib with the oracleasm init command.
/usr/sbin/oracleasm init
systemctl enable oracleasm
systemctl start oracleasm
oracleasm createdisk ASM_DATA1 /dev/sda1
oracleasm createdisk ASM_OCR1 /dev/sdb1
oracleasm createdisk ASM_FLASH1 /dev/sdc1
oracleasm createdisk ASM_ARC1 /dev/sdd1
Scan the newly created ASM Disks.
oracleasm scandisks
oracleasm listdisks
If you want to Remove any existing ASM disk
oracleasm deletedisk DISK_NAME
Check ASM Disks Permissions
They should be grid:asmadmin
ls -lrth /dev/oracleasm/*
brw-rw----. 1 grid asmadmin 8, 17 Feb 11 10:38 ASM_OCR1
brw-rw----. 1 grid asmadmin 8, 33 Feb 11 10:38 ASM_FLASH1
brw-rw----. 1 grid asmadmin 8, 1 Feb 11 10:38 ASM_DATA1
brw-rw----. 1 grid asmadmin 8, 49 Feb 11 10:38 ASM_ARC1