runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
Cluster Verification Pre Installation check failed.
Verifying Physical Memory ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node2         7.5443GB (7910784.0KB)    8GB (8388608.0KB)         failed
  node1         7.5443GB (7910784.0KB)    8GB (8388608.0KB)         failed
Verifying Physical Memory ...FAILED (PRVF-7530)
Reason: The recommended RAM per node for an Oracle 12c R2 Clusterware installation is 8GB. If a node has less physical memory, you will see this failure.
Solution: A minimum of 4GB RAM is enough to run the installation successfully. If you have at least 4GB, you can safely ignore this error and proceed.
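To confirm how much physical memory each node actually has, you can run, for example:
grep MemTotal /proc/meminfo
free -m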
Verifying Group Existence: asmadmin ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         failed                    does not exist
  node1         failed                    does not exist
Verifying Group Existence: asmadmin ...FAILED (PRVG-10461)
Verifying Group Existence: asmdba ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         failed                    does not exist
  node1         failed                    does not exist
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists   User in Group   Status
  ----------------  ------------  -------------  --------------  ----------------
  node2             yes           no             no              failed
  node1             yes           no             no              failed
Verifying Group Membership: asmadmin ...FAILED (PRVG-10460)
Verifying Group Membership: asmdba ...
  Node Name         User Exists   Group Exists   User in Group   Status
  ----------------  ------------  -------------  --------------  ----------------
  node2             yes           no             no              failed
  node1             yes           no             no              failed
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Reason: For security reasons and separation of duties, various OS groups are required. Oracle recommends creating and using these groups. For more information, check the Oracle Clusterware installation guide.
Solution: I am using orainst as the primary group and dba as the secondary group. As long as you do not intend to use the other groups, you can safely ignore these warnings and proceed.
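If you would rather create the recommended groups than ignore the warnings, a minimal sketch looks like this (run as root on every node; the group IDs are only examples, and the software owner is assumed here to be a user named oracle, so adjust both to your environment):
groupadd -g 54327 asmdba
groupadd -g 54329 asmadmin
usermod -a -G asmdba,asmadmin oracle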
Verifying Node Connectivity ...FAILED (PRVG-1172, PRVG-11067, PRVG-11095)
Reason: One or more network interfaces on the servers are not configured as Oracle Clusterware expects.
Solution: Investigate and fix the problem if the affected interface (private or public) participates in your Clusterware installation.
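To narrow the problem down, you can review the interface configuration on each node and rerun just the node connectivity component, for example:
ip addr show
runcluvfy.sh comp nodecon -n node1,node2 -verbose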
Verifying Multicast check ...
Checking subnet "192.9.1.0" for multicast communication with multicast group "224.0.0.251"
Checking subnet "10.0.0.0" for multicast communication with multicast group "224.0.0.251"
Checking subnet "192.168.122.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast check ...FAILED (PRVG-11138)
Reason: The firewall (firewalld) or iptables service is running. You can verify this by running:
systemctl status firewalld
systemctl status iptables.service
Solution: Stop and disable the firewall and/or iptables:
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables.service
systemctl disable iptables.service
Verifying NTP daemon is synchronized with at least one external time source ...FAILED (PRVG-13602)
Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1063)
Reason: You intend to use NTP, but the NTP daemon is not configured to synchronize time from an external source.
Solution: Either disable NTP and let CTSS (Cluster Time Synchronization Service) take care of time synchronization,
or
configure the NTP server accordingly.
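If you choose to let CTSS handle time synchronization, a minimal sketch for nodes running the classic ntpd service looks like this (run as root on every node; if your nodes use chronyd instead, stop and disable chronyd and move /etc/chrony.conf aside in the same way):
systemctl stop ntpd
systemctl disable ntpd
mv /etc/ntp.conf /etc/ntp.conf.orig
rm -f /var/run/ntpd.pid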
Verifying Daemon "avahi-daemon" not configured and running ...FAILED (PRVG-1360)
Verifying Daemon "avahi-daemon" not configured and running ...
Node Name Configured Status
------------ ------------------------ ------------------------
node2 no passed
node1 no passed
Node Name Configured Status
------------ ------------------------ ------------------------
node2 no passed
node1 no passed
Node Name Running? Status
------------ ------------------------ ------------------------
node2 yes failed
node1 yes failed
------------ ------------------------ ------------------------
node2 yes failed
node1 yes failed
Verifying Daemon "avahi-daemon" not configured and running ...FAILED (PRVG-1360)
Reason: The avahi-daemon service is running on the nodes even though it is not configured.
Solution: Stop and disable avahi-daemon if you do not need it:
systemctl status avahi-daemon
systemctl stop avahi-daemon
systemctl disable avahi-daemon
Verifying zeroconf check ...FAILED (PRVE-10077)
Reason: The NOZEROCONF parameter is not set in /etc/sysconfig/network.
Solution: Add the following line to /etc/sysconfig/network on each node:
NOZEROCONF=yes
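For example, as root on each node:
echo "NOZEROCONF=yes" >> /etc/sysconfig/network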
Enjoy HA.....