Saturday, 31 July 2010

Scan Reconfiguration

As you may know, SCAN (Single Client Access Name) is a new feature introduced in Oracle RAC 11.2. Put simply, the SCAN is a single name alias that clients can be configured to use when connecting to the cluster database, and it also handles load balancing and failover for client connections. When nodes are added to or removed from the cluster, the benefit of SCAN is realised: the clients do not need to be reconfigured.

For a good example of configuring clients to use SCAN, Pas from Oracle has written an article which explains how this can be achieved. Just visit his blog entry Using SCAN – Single Client Access Name to Connect to 11g R2 RAC from JDeveloper 11g.
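As a minimal illustration of the client side, a tnsnames.ora entry pointing at the SCAN might look like the sketch below. The SCAN name myscan.earth.com is borrowed from the example later in this post, while the alias MYDB and service name MYSERVICE are hypothetical placeholders:

MYDB =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.earth.com)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = MYSERVICE)
 )
 )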

Configuring your clients to use SCAN in 11.2 or later is optional; however, when you install the Grid Infrastructure software for RAC, a SCAN must still be configured as part of the cluster setup.
It is recommended that the SCAN resolve to three IP addresses registered in DNS for round-robin resolution, although a minimum of one IP address may be configured.
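For example, if the zone were managed in BIND, round-robin resolution would simply be three A records for the same name. This is only a sketch; the exact zone layout will depend on your DNS setup:

; three A records for the SCAN, returned in round-robin order
myscan    IN  A  172.43.22.89
myscan    IN  A  172.43.22.90
myscan    IN  A  172.43.22.91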

Reconfiguration of SCAN

There may be situations where reconfiguration of the SCAN for the cluster is required.
Some examples are:
  • Allocation of IP addresses has changed for SCAN
  • Adding additional IP addresses for SCAN
  • DNS Changes associated with SCAN addressing
  • SCAN name change^
NOTE: ^ A SCAN name change may be required if the name conflicts with that of another cluster on the same network or in DNS; however, this also means that any client references to the old SCAN must be updated. Caution should therefore be taken to ensure that the SCAN name is unique in DNS and on the network.

The steps below show how SCAN addressing can be reconfigured for a cluster in the following scenario.

At the time of the installation, the DNS registration had not been set up. As a workaround, one of the IP addresses allocated for the SCAN was referenced in the /etc/hosts file. Once the installation was completed and the DNS registration updated, the SCAN was reconfigured to be resolvable through DNS for all three IP addresses.
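For reference, the temporary workaround entry in the /etc/hosts file would have looked something like this (the exact line is illustrative, using the first of the allocated SCAN addresses):

# temporary SCAN entry, removed once DNS registration was completed
172.43.22.89   myscan.earth.com   myscan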

Below is the summary of the current and new configuration for SCAN.

                                                   Current         New
SCAN Name                                          myscan          myscan
SCAN IP Address                                    172.43.22.89    172.43.22.89
                                                                   172.43.22.90
                                                                   172.43.22.91
SCAN entry in /etc/hosts file                      Yes*            No
Registered in DNS and configured for round robin   No              Yes

* NOTE: A SCAN entry in the /etc/hosts file is not recommended, hence the reconfiguration to the new setup is required.

1. Verify Current Configuration for SCAN VIP
oracle@lnx01[GRID]:/app/oracle> srvctl config scan
SCAN name: myscan, Network: 1/172.43.22.0/255.255.255.0/bge0:nxge0
SCAN VIP name: scan1, IP: /myscan.earth.com/172.43.22.89
2. Verify status of SCAN Listener:
oracle@lnx01[GRID]:/app/oracle> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node lnx02
3. Verify Current Configuration for SCAN Listener
oracle@lnx01[GRID]:/app/oracle> srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
4. Verify Status for SCAN Listener
oracle@lnx01[GRID]:/app/oracle> srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node lnx02
5. Stop the SCAN Listener
oracle@lnx01[GRID]:/app/oracle> srvctl stop scan_listener
6. Stop the SCAN VIPs
oracle@lnx01[GRID]:/app/oracle> srvctl stop scan
7. Verify that the SCAN Listener has stopped
oracle@lnx01[GRID]:/app/oracle> srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
8. Verify that the SCAN VIP has stopped
oracle@lnx01[GRID]:/app/oracle> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
9. Update the SCAN in DNS and the associated IPs as required
10. Verify that resolv.conf is configured to search the DNS domain that the SCAN was registered in
oracle@lnx01[GRID]:/app/oracle> cat /etc/resolv.conf
search        earth.com
nameserver      172.43.7.1
nameserver      172.44.7.1
11. Verify in DNS that the SCAN is resolvable to all three IP addresses
oracle@lnx01[GRID]:/app/oracle> nslookup myscan
Server:         172.43.7.1
Address:        172.43.7.1#53
Name:   myscan.earth.com
Address: 172.43.22.89
Name:   myscan.earth.com
Address: 172.43.22.90
Name:   myscan.earth.com
Address: 172.43.22.91
12. Verify that the SCAN IP address and name have been removed or commented out from the /etc/hosts file:
oracle@lnx01[GRID]:/app/oracle> grep myscan /etc/hosts
NOTE: In this example no rows are returned, as the SCAN IP address and name have been removed.
13. Modify the SCAN resource so that it picks up all SCAN VIPs resolvable through DNS
oracle@lnx01[GRID]:/app/oracle> sudo srvctl modify scan -n myscan
14. If the SCAN name was changed as well, run the following to update the default SCAN name on the SCAN VIP resource type:
oracle@lnx01[GRID]:/app/oracle> sudo crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=myscan"
15. Verify that all SCAN VIPs are now resolvable and registered with the SCAN resource
oracle@lnx01[GRID]:/app/oracle> srvctl config scan
SCAN name: myscan, Network: 1/172.43.22.0/255.255.255.0/bge0:nxge0
SCAN VIP name: scan1, IP: /myscan.earth.com/172.43.22.90
SCAN VIP name: scan2, IP: /myscan.earth.com/172.43.22.91
SCAN VIP name: scan3, IP: /myscan.earth.com/172.43.22.89
16. Update the SCAN Listener with new SCAN VIPs identified from DNS
oracle@lnx01[GRID]:/app/oracle> srvctl modify scan_listener -u
17. Verify that new SCAN Listeners have been registered for the new SCAN VIPs
oracle@lnx01[GRID]:/app/oracle> srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
18. Start the SCAN Listener
NOTE: This will also start the SCAN VIPs
oracle@lnx01[GRID]:/app/oracle> srvctl start scan_listener
19. Verify that all three SCAN VIPs are enabled and running.
oracle@lnx01[GRID]:/app/oracle> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node lnx02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node lnx01
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node lnx01
20. Verify that all three SCAN Listeners are enabled and running.
oracle@lnx01[GRID]:/app/oracle> srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node lnx02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node lnx01
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node lnx01
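As a final sanity check, it is worth confirming that a client session can now reach the cluster through the reconfigured SCAN, for example with an EZConnect-style connection. The service name MYSERVICE below is a hypothetical placeholder for a service running on the cluster:

oracle@lnx01[GRID]:/app/oracle> sqlplus system@//myscan.earth.com:1521/MYSERVICE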

Friday, 21 May 2010

Cloning a RAC Database

Cloning a RAC database may appear somewhat challenging, but it is really a matter of cloning the database as if it were a single instance and then enabling it as a cluster database after the duplication.
In the scenario below, I will demonstrate how this can be achieved by cloning a RAC database and having it managed by the same cluster. Please note that the following was tested on version 11.1 RAC with ASM.
The RAC configuration used for this example:

Configuration                   Source               Target (Auxiliary)
Host                            lnx01                lnx02
Host VIPs                       lnx01-vip            lnx02-vip
DB Name                         SALT                 PEPPER
Instance Name #1                SALT1                PEPPER1
Instance Name #2                SALT2                PEPPER2
Control File Location           +DG_DATA,+DG_REDO    +DG_DATA,+DG_REDO
Redo File Location              +DG_DATA,+DG_REDO    +DG_DATA,+DG_REDO
Data File Location              +DG_DATA             +DG_DATA
Archive Destination Location    +DG_ARC              +DG_ARC
Remote Listener                 LISTENER_SALT        LISTENER_PEPPER
Local Listener #1               LISTENER_SALT1       LISTENER_PEPPER1
Local Listener #2               LISTENER_SALT2       LISTENER_PEPPER2
NOTE: Copies of the control files and online redo logs, as well as archived redo log files, can be stored in the Flash Recovery Area (FRA) disk group.
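Before starting, it is also worth confirming that the target disk groups have sufficient free space for the clone; asmcmd can show this at a glance:

[oracle@lnx01] export ORACLE_SID=+ASM1; . oraenv
[oracle@lnx01] asmcmd lsdg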
1. Log onto the source RAC database and create a pfile from the spfile:
[oracle@lnx01] export ORACLE_SID=SALT1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> create pfile='/app/oracle/scripts/clone/initPEPPER.ora' from spfile;
2. Edit /app/oracle/scripts/clone/initPEPPER.ora and change the appropriate values; in particular, replace references to SALT with PEPPER in parameters such as db_name, the instance-specific settings and the listener parameters.
Remove the following auto-tuned entries (a one-liner to strip them is sketched after the list):
<SOURCE_INSTANCE1>.__db_cache_size=7650410496
<SOURCE_INSTANCE2>.__db_cache_size=7583301632
<SOURCE_INSTANCE1>.__java_pool_size=67108864
<SOURCE_INSTANCE2>.__java_pool_size=67108864
<SOURCE_INSTANCE1>.__large_pool_size=67108864
<SOURCE_INSTANCE2>.__large_pool_size=67108864
<SOURCE_INSTANCE1>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE2>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE1>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE2>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE1>.__sga_target=9462349824
<SOURCE_INSTANCE2>.__sga_target=9462349824
<SOURCE_INSTANCE1>.__shared_io_pool_size=0
<SOURCE_INSTANCE2>.__shared_io_pool_size=0
<SOURCE_INSTANCE1>.__shared_pool_size=1409286144
<SOURCE_INSTANCE2>.__shared_pool_size=1476395008
<SOURCE_INSTANCE1>.__streams_pool_size=134217728
<SOURCE_INSTANCE2>.__streams_pool_size=134217728
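Rather than deleting these lines by hand, they can be stripped in one go. A minimal sketch, assuming the pfile path used in step 1:

[oracle@lnx01] cd /app/oracle/scripts/clone
[oracle@lnx01] grep -v '\.__' initPEPPER.ora > initPEPPER.tmp && mv initPEPPER.tmp initPEPPER.ora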
3. Adjust the memory target as appropriate for your environment:
*.memory_target=2G
4. Set spfile parameter:
*.spfile='+DG_DATA/PEPPER/spfilePEPPER.ora'
5. Disable the cluster_database parameter:
*.cluster_database=FALSE
6. Create the init parameter file under $ORACLE_HOME/dbs:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] cd $ORACLE_HOME/dbs
[oracle@lnx01] echo "SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'" > $ORACLE_HOME/dbs/initPEPPER.ora
7. Edit the initPEPPER.ora created in the previous step and add the following convert parameters:
*.log_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')
*.db_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')
8. Create the directories:
a) From one node only, create the ASM path for PEPPER database:
[oracle@lnx01] export ORACLE_SID=+ASM1; . oraenv
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER/brsadata
b) On node 1, create the OS paths for the PEPPER admin directories:
[oracle@lnx01] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/cdump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/trace
c) Repeat step b on node 2 and other nodes in RAC:
[oracle@lnx02] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/cdump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/trace
9. Create TNS entries for the target auxiliary database and place them in the tnsnames.ora. Add these entries on all nodes in the cluster:
PEPPER =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = PEPPER)
 )
 )

LISTENERS_PEPPER =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 )

LISTENER_PEPPER1 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))

LISTENER_PEPPER2 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
10. If no listener.ora exists, create one as follows
# Node 1
 LISTENER_lnx01 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.122)(PORT = 1521)(IP = FIRST))
 )
 )
# Node 2
 LISTENER_lnx02 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.126)(PORT = 1521)(IP = FIRST))
 )
 )
11. Create an SPFILE from the parameter file, and start the auxiliary instance:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus /nolog
SQL> CONNECT / as sysdba
SQL> CREATE SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'
FROM PFILE='/app/oracle/scripts/clone/initPEPPER.ora';
SQL> STARTUP NOMOUNT;
12. Run the following RMAN commands to duplicate the database:
[oracle@lnx01] rman target sys/<password>@SALT auxiliary /
RMAN> DUPLICATE TARGET DATABASE TO PEPPER;
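Depending on how RMAN channels are configured in your environment, it may be necessary to allocate an auxiliary channel explicitly. A hedged sketch of that variant:

RMAN> run {
  allocate auxiliary channel aux1 device type disk;
  duplicate target database to PEPPER;
}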
13. Add entries for the target auxiliary database to the oratab file (located under /etc or /var/opt/oracle):
a) In the node 1 oratab file, add:
PEPPER1:/app/oracle/product/11.1:N
b) In the node 2 oratab file, add:
PEPPER2:/app/oracle/product/11.1:N
14. Enable the cluster parameter in the target auxiliary database from one node only:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus / as sysdba
SQL> alter system set cluster_database=TRUE scope=spfile;
SQL> shutdown immediate;
15. Rename the init file and then copy it across to the second node:
[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] mv $ORACLE_HOME/dbs/initPEPPER.ora $ORACLE_HOME/dbs/initPEPPER1.ora
[oracle@lnx01] scp $ORACLE_HOME/dbs/initPEPPER1.ora \
lnx02:/app/oracle/product/11.1/dbs/initPEPPER2.ora
16. Start up the instance:
[oracle@lnx01] sqlplus / as sysdba
SQL> startup
17. Add the second thread of online redo logs and enable that thread:
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 '+DG_DATA' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO' TO GROUP 5 ;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6  '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 7 '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 7;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 8 '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 8;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
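A quick way to confirm that the second thread is in place before shutting everything down is to query v$thread and v$log, for example:

SQL> -- thread 2 should now be listed with ENABLED = PUBLIC
SQL> SELECT thread#, status, enabled FROM v$thread;
SQL> SELECT thread#, group#, members, status FROM v$log ORDER BY thread#, group#;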
18. Stop all instances of the target auxiliary database on each node
a) From node 1 perform the following:
[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> shutdown immediate;
b) Then perform the same for the other instances in the cluster, from their respective nodes:
[oracle@lnx02] export ORACLE_SID=PEPPER2; . oraenv
[oracle@lnx02] sqlplus / as sysdba
SQL> shutdown immediate;
19. To add the clone into the cluster, register the new database and its instances with srvctl:
[oracle@lnx01] srvctl add database -d PEPPER -o /app/oracle/product/11.1
[oracle@lnx01] srvctl modify database -d PEPPER -p '+DG_DATA/PEPPER/spfilePEPPER.ora' -s open
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER1 -n lnx01
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER2 -n lnx02
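Optionally, a service can also be registered for the clone at this point; the service name PEPPER_SVC below is just a hypothetical example using the 11.1 srvctl syntax:

[oracle@lnx01] srvctl add service -d PEPPER -s PEPPER_SVC -r PEPPER1,PEPPER2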
20. Start the RAC database
[oracle@lnx01] srvctl start database -d PEPPER
21. Now verify that the RAC database has registered with the cluster.
For pre-11.2 run:
crs_stat -t
Or for 11.2 run:
crsctl stat res -t
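As an additional check, you can also confirm from SQL*Plus that both instances of the clone are open, for example:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;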