
Wednesday, 4 May 2011

Cluster Resource Control Filters

In 11.2 RAC, to avoid listing all cluster resources (via CRSCTL), you can use the “-w” filter switch. This is useful when you only want to look at a specific status for a given resource.

A. List All Databases Managed by the Cluster


The example below lists the status of all databases managed by the cluster.
[oracle@lnx01] export ORACLE_SID=GRID; . oraenv
[oracle@lnx01] crsctl stat res -t -w "TYPE = ora.database.type"
The output for the above example is as follows:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.salt.db
      1        ONLINE  ONLINE       lnx01                 Open              
      2        ONLINE  ONLINE       lnx02                 Open              
ora.pepper.db
      1        ONLINE  ONLINE       lnx01                 Open              
      2        ONLINE  ONLINE       lnx02                 Open      
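Filter expressions can also be combined with and/or. For example, a variation of the same filter that shows only the database resources currently online:
[oracle@lnx01] crsctl stat res -t -w "(TYPE = ora.database.type) and (STATE = ONLINE)"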
       
Below is a list of some of the resource types that the filter can be set to match:

NOTE: The filter from the example can be changed from “ora.database.type” to “ora.service.type”. For more options, see the listing below.
[oracle@lnx01] crsctl stat res | grep "TYPE=" | sort -u
Sample output:
TYPE=application
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.gsd.type
TYPE=ora.listener.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=ora.service.type
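For example, to list only the services managed by the cluster:
[oracle@lnx01] crsctl stat res -t -w "TYPE = ora.service.type"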

B. Check Cluster Resources That Are Offline


Example:
[oracle@lnx01] crsctl stat res -t -w "STATE = OFFLINE"
NOTE: If you are not using GSD, it is OK for the following resource to be offline.
Sample Output:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.gsd
OFFLINE OFFLINE      lnx01
OFFLINE OFFLINE      lnx02

C. Verify Local Resources That Are Online


Example:
[oracle@lnx01] crsctl stat res -init -t -w "STATE = ONLINE"

NOTE: You can use “-init” to check local resources.

Sample Output:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1        ONLINE  ONLINE       lnx01                 Started
ora.cluster_interconnect.haip
1        ONLINE  ONLINE       lnx01
ora.crf
1        ONLINE  ONLINE       lnx01
ora.crsd
1        ONLINE  ONLINE       lnx01
ora.cssd
1        ONLINE  ONLINE       lnx01
ora.cssdmonitor
1        ONLINE  ONLINE       lnx01
ora.ctssd
1        ONLINE  ONLINE       lnx01                 OBSERVER
ora.diskmon
1        ONLINE  ONLINE       lnx01
ora.drivers.acfs
1        ONLINE  ONLINE       lnx01
ora.evmd
1        ONLINE  ONLINE       lnx01
ora.gipcd
1        ONLINE  ONLINE       lnx01
ora.gpnpd
1        ONLINE  ONLINE       lnx01
ora.mdnsd
1        ONLINE  ONLINE       lnx01

D. Check for Resources That Are Not Online


When I run “crsctl stat res -t”, it prints line after line for every resource in the cluster, and it is easy to overlook resources in a pending or bad status that need special attention. Don’t get me wrong, it is nice to know about all your resources. However, there is a better way to do a status check and report only the pending or bad resources.

Note that crsctl stat res -t -w "(TARGET = OFFLINE) or (STATE = OFFLINE)" will only report resources whose target or state is OFFLINE. It will not report any other target or state information.

The example below shows how to capture resources with a pending (STARTING, INTERMEDIATE) or bad (OFFLINE, UNKNOWN, etc.) status; in other words, it reports all resources whose target or state is not ONLINE.

In addition, I want to exclude the gsd (ora.gsd) resource from the check: since we are not hosting any Oracle 9i RAC databases on this cluster, the target and state of this resource will always show as OFFLINE.
Example:
[oracle@lnx01] crsctl stat res -t -w "((TARGET != ONLINE) or (STATE != ONLINE)) and (TYPE != ora.gsd.type)"
NOTE: No output is shown if all cluster resources are up and running; otherwise the impacted resources are displayed.
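If you want to run this check regularly (from cron, for example), it can be wrapped in a small shell script. This is only a sketch, and the Grid home path is an assumption to adjust for your site:
#!/bin/bash
# rac_res_check.sh - report cluster resources whose target or state is
# not ONLINE, excluding the ora.gsd resource type.
# NOTE: the Grid home below is an assumed path; change it for your site.
export ORACLE_HOME=/app/grid/11.2
export PATH=$ORACLE_HOME/bin:$PATH

OUT=$(crsctl stat res -t -w "((TARGET != ONLINE) or (STATE != ONLINE)) and (TYPE != ora.gsd.type)")

# crsctl prints nothing when every resource is healthy
if [ -n "$OUT" ]; then
  echo "Cluster resources needing attention:"
  echo "$OUT"
fi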

Friday, 21 May 2010

Cloning a RAC Database

Cloning a RAC database may appear somewhat challenging, but it is really just cloning the database as if it were a single instance and then enabling it as a cluster database after the duplication.
In the scenario below, I will demonstrate how this can be achieved by cloning a RAC database and having it managed by the same cluster. Please note that the following was tested on 11.1 RAC with ASM.
The RAC configuration used for this example:

Configuration                   Source               Target (Auxiliary)
Host                            lnx01                lnx02
Host VIPs                       lnx01-vip            lnx02-vip
DB Name                         SALT                 PEPPER
Instance Name #1                SALT1                PEPPER1
Instance Name #2                SALT2                PEPPER2
Control File Location           +DG_DATA,+DG_REDO    +DG_DATA,+DG_REDO
Redo File Location              +DG_DATA,+DG_REDO    +DG_DATA,+DG_REDO
Data File Location              +DG_DATA             +DG_DATA
Archive Destination Location    +DG_ARC              +DG_ARC
Remote Listener                 LISTENER_SALT        LISTENER_PEPPER
Local Listener #1               LISTENER_SALT1       LISTENER_PEPPER1
Local Listener #2               LISTENER_SALT2       LISTENER_PEPPER2
NOTE: Control file copies, online redo log copies, and archived redo log files can be stored in the Flash Recovery Area (FRA) disk group.
1. Log on to the source RAC database:
[oracle@lnx01] export ORACLE_SID=SALT1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> create pfile='/app/oracle/scripts/clone/initPEPPER.ora' from spfile;
2. Edit /app/oracle/scripts/clone/initPEPPER.ora and change the appropriate values.
Remove the following instance-specific entries (a one-liner that does this is sketched after the list):
<SOURCE_INSTANCE1>.__db_cache_size=7650410496
<SOURCE_INSTANCE2>.__db_cache_size=7583301632
<SOURCE_INSTANCE1>.__java_pool_size=67108864
<SOURCE_INSTANCE2>.__java_pool_size=67108864
<SOURCE_INSTANCE1>.__large_pool_size=67108864
<SOURCE_INSTANCE2>.__large_pool_size=67108864
<SOURCE_INSTANCE1>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE2>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE1>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE2>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE1>.__sga_target=9462349824
<SOURCE_INSTANCE2>.__sga_target=9462349824
<SOURCE_INSTANCE1>.__shared_io_pool_size=0
<SOURCE_INSTANCE2>.__shared_io_pool_size=0
<SOURCE_INSTANCE1>.__shared_pool_size=1409286144
<SOURCE_INSTANCE2>.__shared_pool_size=1476395008
<SOURCE_INSTANCE1>.__streams_pool_size=134217728
<SOURCE_INSTANCE2>.__streams_pool_size=134217728
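A quick way to strip these double-underscore entries is a sed one-liner. This is a sketch (it assumes GNU or BSD sed and keeps a .bak backup), so review the file afterwards:
[oracle@lnx01] sed -i.bak '/\.__/d' /app/oracle/scripts/clone/initPEPPER.ora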
3. Adjust the memory target for your environment:
*.memory_target=2G
4. Set spfile parameter:
*.spfile='+DG_DATA/PEPPER/spfilePEPPER.ora'
5. Disable the cluster_database parameter:
*.cluster_database=FALSE
6. Create the init parameter file under $ORACLE_HOME/dbs:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] cd $ORACLE_HOME/dbs
[oracle@lnx01] echo "SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'" > $ORACLE_HOME/dbs/initPEPPER.ora
7. Edit $ORACLE_HOME/dbs/initPEPPER.ora and add the following convert parameters after the SPFILE entry:
*.log_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')
*.db_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')
8. Create the directories:
a) From one node only, create the ASM path for PEPPER database:
[oracle@lnx01] export ORACLE_SID=+ASM1; . oraenv
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER/brsadata
b) From one node only, create the OS path for PEPPER admin directories:
[oracle@lnx01] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/cdump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/trace
c) Repeat step b on node 2 and other nodes in RAC:
[oracle@lnx02] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/cdump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/trace
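To confirm the new paths, a quick check (using the +ASM1 environment from step 8a for the ASM directory):
[oracle@lnx01] asmcmd ls +DG_DATA/PEPPER
[oracle@lnx01] ls -ld /app/oracle/admin/PEPPER/adump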
9. Create TNS entries for the target (auxiliary) database and add them to tnsnames.ora on all nodes in the cluster:
PEPPER =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = PEPPER)
 )
 )

LISTENERS_PEPPER =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 )

LISTENER_PEPPER1 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))

LISTENER_PEPPER2 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
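Once the entries are in place on every node, a quick sanity check:
[oracle@lnx01] tnsping PEPPER
[oracle@lnx02] tnsping PEPPER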
10. If no listener.ora exists, create one as follows:
# Node 1
 LISTENER_lnx01 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.122)(PORT = 1521)(IP = FIRST))
 )
 )
# Node 2
 LISTENER_lnx02 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.126)(PORT = 1521)(IP = FIRST))
 )
 )
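After creating the file, you can verify that the listener is up on each node, for example:
[oracle@lnx01] lsnrctl status LISTENER_lnx01
[oracle@lnx02] lsnrctl status LISTENER_lnx02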
11. Create an SPFILE from the parameter file, and start the auxiliary instance:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus /nolog
SQL> CONNECT / as sysdba
SQL> CREATE SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'
FROM PFILE='/app/oracle/scripts/clone/initPEPPER.ora';
SQL> STARTUP NOMOUNT;
12. Then run the following RMAN commands to duplicate the database:
[oracle@lnx01] rman target sys/<password>@SALT auxiliary /
RMAN> DUPLICATE TARGET DATABASE TO PEPPER;
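If you need more control over the duplication (parallelism, for instance), the same command can be issued inside a run block with explicitly allocated auxiliary channels; a minimal sketch:
RMAN> run {
  allocate auxiliary channel aux1 device type disk;
  allocate auxiliary channel aux2 device type disk;
  duplicate target database to PEPPER;
}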
13. Add an entry for the target (auxiliary) database to the oratab file (located under /etc or /var/opt/oracle) on each node:
a) On node 1, add:
PEPPER1:/app/oracle/product/11.1:N
b) On node 2, add:
PEPPER2:/app/oracle/product/11.1:N
14. Enable the cluster parameter in the target (auxiliary) database from one node only:
[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus / as sysdba
SQL> alter system set cluster_database=TRUE scope=spfile;
SQL> shutdown immediate;
15. Rename the init file and then copy it across to the second node:
[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] mv $ORACLE_HOME/dbs/initPEPPER.ora $ORACLE_HOME/dbs/initPEPPER1.ora
[oracle@lnx01] scp $ORACLE_HOME/dbs/initPEPPER1.ora \
lnx02:/app/oracle/product/11.1/dbs/initPEPPER2.ora
16. Start the first instance:
SQL> startup
17. Add the second thread of online redo logs and enable the thread:
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 '+DG_DATA' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO' TO GROUP 5 ;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6  '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 7 '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 7;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 8 '+DG_REDO' SIZE 250M reuse;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 8;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
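At this point both threads and their redo log groups can be confirmed from SQL*Plus:
SQL> select thread#, group#, bytes/1024/1024 mb, status from v$log order by thread#, group#;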
18. Stop all instances of the target (auxiliary) database on each node:
a) From node 1, perform the following:
[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> shutdown immediate;
b) Then perform the same for the other instances in the cluster:
[oracle@lnx02] export ORACLE_SID=PEPPER2; . oraenv
[oracle@lnx02] sqlplus / as sysdba
SQL> shutdown immediate;
19. Add the new database and its instances to the cluster using srvctl:
[oracle@lnx01] srvctl add database -d PEPPER -o /app/oracle/product/11.1
[oracle@lnx01] srvctl modify database -d PEPPER -p '+DG_DATA/PEPPER/spfilePEPPER.ora' -s open
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER1 -n lnx01
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER2 -n lnx02
20. Start the RAC database:
[oracle@lnx01] srvctl start database -d PEPPER
21. Now verify that the RAC database has registered with the cluster.
For pre-11.2 run:
crs_stat -t
Or for 11.2 run:
crsctl stat res -t
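On 11.2 you can also combine this with the -w filter from the earlier post to narrow the check to database resources only:
crsctl stat res -t -w "TYPE = ora.database.type"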