Sunday, 8 September 2013
EM 12c and Complex Networks @ Oracle Open World 2013
I have known Tony Kirn for over 8 years. Not only do we share a common interest in Oracle databases, but we are also great buddies.
Yes, my opinion may be considered biased, but he is highly regarded throughout the DBA community in Australia.
Tony has been working in IT since 1996 and has worked his way up from running cables and tinkering with hardware to finding his real passion for Oracle databases.
With his extensive experience across the telephony, banking and finance, and government industries, he has developed an appreciation and understanding of all the layers upon which databases rest. This has allowed him to make some of the most critical and innovative decisions throughout his career as a database professional.
He was involved in the first successful deployment of Enterprise Manager 10g in Australia at Telstra and was recently the tech lead responsible for delivering Enterprise Manager 12c. He will be presenting at Oracle Open World this year.
So if you are at the conference, I strongly recommend attending his session, as he will provide some invaluable advice from his experience deploying Enterprise Manager in complex networks.
The Oracle Open World Session Details are:
Session ID: CON3991
Session Title: Deploying Oracle Enterprise Manager 12c in a Complex Network Without Compromising Security
Date and Time: 24th September 2013, 17:15 - 18:15
Location: Moscone Center, San Francisco USA
Venue / Room: Moscone South - Room 300
Labels:
12c,
2013,
australia,
dba,
databases,
kirn,
melbourne,
network,
OEM,
oracle open world,
presentation,
san francisco,
security,
session,
tony
Friday, 21 June 2013
DBCA for RAC returns ORA-15025: could not open disk
When attempting to create a RAC database, ASM fails if the wrong group ownership is set on the oracle binary. Note that this issue may also impact a standalone database running on ASM. It is likely to occur if you are running a different OS user for the Grid Infrastructure software than for your Database software.
Summary:
Configuration | Grid Infrastructure (GRID) | Database (DB) |
Software Owner (OS User) | grid | oracle |
Primary OS Group | oinstall | oinstall |
Admin Group | asmadmin | dba |
ASM Group | asmdba | asmdba |
ORACLE_HOME | /u01/app/11.2.0.3/grid | /u01/app/11.2.0.3/db |
A. Steps to Reproduce Issue
1. OS user for Grid software is "grid":
grid@lnx01:[GRID]$ id -a
uid=54320(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54324(asmdba),54325(asmadmin),54326(asmoper)
2. OS user for Database software is "oracle":
oracle@lnx01:[DB]$ id -a
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(asmdba)
NOTE: The "asmdba" OS group is assigned to both OS users "grid" and "oracle".
3. As oracle OS user execute "dbca" to create the database:
Method 1 - DBCA (interactive mode):
oracle@lnx01:[DB]$ dbca
Method 2 - run DBCA (silent mode):
oracle@lnx01:[DB]$ dbca -silent -responsefile dbca11203.rsp
B. Error Reported
2013-05-02 17:59:22.197000 +10:00
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA4"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA2"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA3"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
SUCCESS: diskgroup DATA was mounted
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1401):
ORA-00600: internal error code, arguments: [kfioTranslateIO03], [], [], [],
[], [], [], [], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1401/BLUE_ora_29895_i1401.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:24.033000 +10:00
Dumping diagnostic data in directory=[cdmp_20130502175924], requested by
(instance=1, osid=29895), summary=[incident=1401].
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1402):
ORA-00600: internal error code, arguments: [17090], [], [], [], [], [], [],
[], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1402/BLUE_ora_29895_i1402.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:25.037000 +10:00
ERROR: unrecoverable error ORA-600 raised in ASM I/O path; terminating process
29895
Dumping diagnostic data in directory=[cdmp_20130502175925], requested by
(instance=1, osid=29895), summary=[incident=1402].
Shutting down instance (abort)
License high water mark = 2
USER (ospid: 29941): terminating the instance
Instance terminated by USER, pid = 29941
Instance shutdown complete
C. Solution
1. Verify that the oradism binary has the correct setuid permission under the ${GRID_HOME}/bin directory:
grid@lnx01:[GRID]$ ls -lrt /u01/app/11.2.0.3/grid/bin/oradism
-rwsr-x--- 1 root oinstall 71758 Sep 17 2011 /u01/app/11.2.0.3/grid/bin/oradism
2. Verify that the oracle permissions (6751) are correct under the ${GRID_HOME}/bin directory:
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/grid/bin/oracle
-rwsr-s--x 1 grid oinstall 204090154 May 2 13:34 /u01/app/11.2.0.3/grid/bin/oracle
3. Verify that the oracle permissions (6751) are correct under the ${DB_HOME}/bin directory:
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/db/bin/oracle
-rwsr-s--x 1 oracle oinstall 221309039 May 2 13:49 /u01/app/11.2.0.3/db/bin/oracle
NOTE: If the permissions are not correct, then run the following as the grid OS user from ${GRID_HOME}:
grid@lnx01:[GRID]$ /u01/app/11.2.0.3/grid/bin/setasmgidwrap o=/u01/app/11.2.0.3/db/bin/oracle
4. Change group ownership to asmadmin for oracle binary under ${DB_HOME}:
grid@lnx01:[GRID]$ chgrp asmadmin /u01/app/11.2.0.3/db/bin/oracle
5. Verify that the permissions and ownership are now correct under ${DB_HOME}:
grid@lnx01:[GRID]$ ls -rlt /u01/app/11.2.0.3/db/bin/oracle
-rwxr-x--x 1 oracle asmadmin 221309039 May 2 13:49 /u01/app/11.2.0.3/db/bin/oracle
6. Verify the SS_ASM_GRP setting in config.c file under ${GRID_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/grid/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "oinstall"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
7. Verify the SS_ASM_GRP setting in config.c file under ${DB_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/db/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP ""
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
8. Edit the config.c file under both ${GRID_HOME}/rdbms/lib and ${DB_HOME}/rdbms/lib and set the following:
#define SS_ASM_GRP "asmdba"
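If you prefer not to edit the files by hand, the change can also be made with a stream edit. This is only a sketch, assuming GNU sed on Linux, that the define appears exactly as shown in the verification output above, and that it is run as a user with write access to each home (back up config.c first):
for h in /u01/app/11.2.0.3/grid /u01/app/11.2.0.3/db; do
  cp -p ${h}/rdbms/lib/config.c ${h}/rdbms/lib/config.c.bak   # keep a backup copy
  sed -i 's/^#define SS_ASM_GRP.*/#define SS_ASM_GRP "asmdba"/' ${h}/rdbms/lib/config.c
done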
9. Re-verify that the correct value is now set for SS_ASM_GRP under ${GRID_HOME} and ${DB_HOME}:
grid@lnx01:[GRID]$ cat ${GRID_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
oracle@lnx01:[DB]$ cat ${DB_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
10. Relink the Grid software as the grid OS user, using sudo for the rootcrs.pl steps:
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -unlock
grid@lnx01:[GRID]$ relink all
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -patch
NOTE: Check log for errors: /u01/app/11.2.0.3/grid/install/relink.log
11. Make sure this is correct on all nodes in the cluster. If it is not, correct the issue using the steps above; a quick check across nodes is sketched below.
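As a sanity check, a loop like the following can confirm the ownership and permissions of the oracle binaries on each node. The node names are just the example hosts used in this post, and passwordless ssh between nodes is assumed:
# Check oracle binary ownership/permissions on every cluster node (assumes ssh user equivalence)
for node in lnx01 lnx02; do
  echo "### ${node}"
  ssh ${node} "ls -l /u01/app/11.2.0.3/grid/bin/oracle /u01/app/11.2.0.3/db/bin/oracle"
done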
12. Then re-run "dbca".
Labels:
11g,
ASM,
asmadmin,
asmdba,
config.c,
dbca,
Grid Infrastructure,
installation,
oracle,
ownership,
permission,
RAC,
relink,
rootcrs.pl,
software,
sudo,
troubleshoot
Sunday, 16 June 2013
Unable to Create ASM Disk Group for OCR and Voting Disk
When installing the Grid Infrastructure (GI) software for a 2-node RAC the other day, I came across an issue where the cluster was not able to create the ASM disk group for the OCR and Voting Disk because it could not find the appropriate ASM disks.
Configuration Setting:
The following Grid Infrastructure (GI) ORACLE_HOME is:
GI_HOME=/u01/app/oracle/11.2.0.3/grid
A. Steps to Reproduce Issue
The issue occurred when executing root.sh with super user equivalent privileges:
oracle@lnx01:[GRID]$ sudo /u01/app/oracle/11.2.0.3/grid/root.sh
B. Error Reported
The following output was reported:
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/oracle/11.2.0.3/grid
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/11.2.0.3/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lnx01'
CRS-2676: Start of 'ora.cssdmonitor' on 'lnx01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lnx01'
CRS-2672: Attempting to start 'ora.diskmon' on 'lnx01'
CRS-2676: Start of 'ora.diskmon' on 'lnx01' succeeded
CRS-2676: Start of 'ora.cssd' on 'lnx01' succeeded
Disk Group DG_CCF creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/asm_ccf3' matches no disks
ORA-15031: disk specification '/dev/asm_ccf2' matches no disks
ORA-15031: disk specification '/dev/asm_ccf1' matches no disks
This error is also found in the log:
${GI_HOME}/install/root_<HOST>_<TIMESTAMP>.log
C. Solution
1. Verify that the partitions are visible and have the appropriate permissions and ownership. If not, ask your system administrator to correct the issue:
oracle@lnx01:[GRID]$ ls -l /dev/sd*1
NOTE: These devices may have a different name from /dev/sd*1; please check with your system administrator.
2. If running on the Linux platform, mark the ASM disks using oracleasm (i.e. if Oracle ASMLib is installed). Otherwise this step can be skipped.
oracle@lnx01:[GRID]$ oracleasm createdisk CCF1 /dev/sdb1
oracle@lnx01:[GRID]$ oracleasm createdisk CCF2 /dev/sdc1
oracle@lnx01:[GRID]$ oracleasm createdisk CCF3 /dev/sdd1
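To confirm that the disks were marked, the ASMLib utilities below can be used. The /dev/oracleasm/disks path is ASMLib's default mount point and is assumed here:
oracle@lnx01:[GRID]$ oracleasm listdisks
oracle@lnx01:[GRID]$ ls -l /dev/oracleasm/disks/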
3. Edit the ${GI_HOME}/crs/install/crsconfig_params file and correct the path for ASM disks.
Example:
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/CCF1,/dev/oracleasm/disks/CCF2,/dev/oracleasm/disks/CCF3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
NOTE: If using oracleasm, specify the /dev/oracleasm/disks path to the disks. Otherwise specify the path to the pseudo device name. Please check with your system administrator for the device names allocated for the ASM disks.
4. Re-execute the root.sh script as the super user equivalent:
oracle@lnx01:[GRID]$ sudo /u01/app/oracle/11.2.0.3/grid/root.sh
5. The execution of the root.sh script should now complete successfully.
Labels:
11g,
ASM,
Grid Infrastructure,
installation,
OCR,
ORA,
oracle,
oracleasm,
Quorum,
RAC,
root.sh,
sudo,
troubleshoot,
Voting Disk
Thursday, 30 May 2013
Creating Multiple TNS Aliases using a Response file
I often hear DBAs who run into TNS issues rambling on about:
“I can’t believe there was a missing bracket in the TNS entry” or
“The syntax was incorrect in the TNS”.
Yes, I admit, I have been one of those DBAs who has hacked away at the tnsnames.ora and faced this issue in the past. Although not widely known or used, there are solutions to this issue.
Solution A:
You can use the netca wizard if you prefer to click away; however, you need to ensure that you have an X session enabled.
Solution B: (I prefer this option!)
A quicker and easier way of creating multiple TNS entries is to use a netca response file in silent mode.
You can use the netca response file to create listener(s) as well, but in this example we will concentrate on creating TNS aliases (or net services) using a response file. This is not a new feature and has been available in previous versions of Oracle.
1. I want to create the following TNS alias (or net service) entries:
TNS Alias | Net Service | Protocol | Hostname | Port |
BLUE | BLUE | TCP | lnx121 | 1521 |
GREEN | GREEN | TCP | lnx122 | 1521 |
2. Take note of the following parameters
NSN_NUMBER – The number of TNS aliases you want to create
NSN_NAMES – List of all the TNS alias names
NSN_SERVICE – List of all the TNS services associated with the TNS aliases
NSN_PROTOCOLS – The protocol, host and port for each TNS alias
3. The response file I will create is called 11203_netca.rsp, and I have configured it as below:
##################### 11203_netca.rsp ######################
[GENERAL]
RESPONSEFILE_VERSION="11.2"
CREATE_TYPE="CUSTOM"
[oracle.net.ca]
INSTALL_TYPE=""custom""
NAMING_METHODS={"TNSNAMES","ONAMES","HOSTNAME"}
#--------------- Modify settings in this section ---------------
NSN_NUMBER=2
NSN_NAMES={"BLUE","GREEN"}
NSN_SERVICE={"BLUE","GREEN"}
NSN_PROTOCOLS={"TCP;lnx121;1521","TCP;lnx122;1521"}
#---------------------------------------------------------------
#######################################################
4. Now execute the netca utility in silent mode using the response file (eg. 11203_netca.rsp)
oracle@lnx121:[BLUE]$ netca /silent /responseFile /home/oracle/11203_netca.rsp
Output:
Parsing command line arguments:
Parameter "silent" = true
Parameter "responsefile" = /home/oracle/11203_netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Warning: Oracle Restart is not running.
Listener configuration will not be registered/unregistered with Oracle Restart.
Default local naming configuration complete.
Created net service name: BLUE
Default local naming configuration complete.
Created net service name: GREEN
Profile configuration complete.
Oracle Net Services configuration successful. The exit code is 0
5. Verify that the TNS aliases for BLUE and GREEN are now created in tnsnames.ora:
oracle@lnx121:[BLUE]$ cat $TNS_ADMIN/tnsnames.ora
Output:
# tnsnames.ora Network Configuration File: /u01/app/oracle/11.2.0.3/db/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
BLUE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = lnx121)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = BLUE)
)
)
GREEN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = lnx122)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = GREEN)
)
)
6. Perform a tnsping for each TNS alias to confirm the entries are working.
a) TNS alias - BLUE
oracle@lnx121:[BLUE]$ tnsping BLUE
Output:
TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 29-MAY-2013 11:13:30
Copyright (c) 1997, 2011, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/11.2.0.3/db/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =TCP)(HOST = lnx121)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME =BLUE)))
OK (0 msec)
b) TNS alias - GREEN
oracle@lnx121:[BLUE]$ tnsping GREEN
Output:
TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 29-MAY-2013 11:14:21
Copyright (c) 1997, 2011, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/11.2.0.3/db/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL =TCP)(HOST = lnx122)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME =GREEN)))
OK (0 msec)
7. This now completes the TNS alias configuration.
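As a side note, when you have many aliases to create, the NSN_* lines of the response file can be generated from a simple list rather than hand-edited. The snippet below is only a sketch of that idea (not part of the post's original steps), reusing the alias/host/port values from the example above:
# Sketch: build NSN_* lines for a netca response file from a list of "alias:host:port" entries
ENTRIES="BLUE:lnx121:1521 GREEN:lnx122:1521"
NAMES=""; SERVICES=""; PROTOCOLS=""; COUNT=0
for e in ${ENTRIES}; do
  alias=${e%%:*}; rest=${e#*:}; host=${rest%%:*}; port=${rest##*:}
  NAMES="${NAMES:+$NAMES,}\"$alias\""
  SERVICES="${SERVICES:+$SERVICES,}\"$alias\""
  PROTOCOLS="${PROTOCOLS:+$PROTOCOLS,}\"TCP;$host;$port\""
  COUNT=$((COUNT+1))
done
echo "NSN_NUMBER=$COUNT"
echo "NSN_NAMES={$NAMES}"
echo "NSN_SERVICE={$SERVICES}"
echo "NSN_PROTOCOLS={$PROTOCOLS}"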
Wednesday, 10 April 2013
Sessions and Events in the Database
Active Sessions
When you want to verify current activity in the database, the following sos_sessions_active.sql script will list information on active sessions, blockers, associated events, impacted objects, users, programs and services that are currently running across all instances in the database (i.e. this script can be run against a single-instance or RAC database). An example of this report can be found here.
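The script itself is not reproduced in this post, but the core idea can be sketched with a query along these lines; this is a simplified illustration only, not the actual sos_sessions_active.sql:
# Sketch only: list active user sessions across all instances
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col username format a15
col event format a30
select inst_id, sid, serial#, username, sql_id, event, blocking_session, program
from   gv$session
where  status = 'ACTIVE'
and    type   = 'USER'
order  by inst_id, sid;
EOF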
Historical Sessions
At times, identifying performance issues in real time can be challenging; by the time the DBA is called upon to investigate, the issue has disappeared. Rather than waiting for the issue to resurface, Active Session History (ASH), introduced in 10g, has become an invaluable source that allows DBAs to perform post analysis of sessions captured in the past. Using the sos_sessions_history.sql script, by specifying the time period you wish to review, the report will return details on sessions and associated events.
NOTE: The columns “machine” and “sql_opname” are not available in DBA_HIST_ACTIVE_SESS_HISTORY in versions 10g and 11.1, so the sos_sessions_history.sql script will need to be edited to remove these columns from the query if it is executed against versions earlier than 11.2.
A snippet from a sample output can be found here.
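Again, the actual sos_sessions_history.sql is not shown here, but a minimal sketch of the idea, using a hard-coded time window purely for illustration, might look like this:
# Sketch only: top events/SQL from ASH history for a given time window (not the actual script)
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col event format a30
select instance_number, session_id, sql_id, event, count(*) samples
from   dba_hist_active_sess_history
where  sample_time between to_timestamp('02-05-2013 17:00', 'DD-MM-YYYY HH24:MI')
                       and to_timestamp('02-05-2013 18:00', 'DD-MM-YYYY HH24:MI')
group  by instance_number, session_id, sql_id, event
order  by samples desc;
EOF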
Wednesday, 7 December 2011
ASMCMD Wrapper ... For the Frequent ASM Administrator
In the pre-11g days of ASM, I was hesitant to use ASMCMD (command line utility to perform ASM tasks)... perhaps in its earlier days, it was a combination of limited functionality and my comfort in connecting to the ASM instance via sqlplus to perform administrative tasks.
But now in 11g, I have grown to use ASMCMD and have found it to be more useful than I had first envisaged. In 11g, some of the new functionality offered through ASMCMD includes the ability to:
• start up and shut down the ASM instance
• back up (md_backup) and restore (md_restore) ASM metadata
• list open files (lsof)
• verify disk usage (du)
• change/alter (chdg) disks in disk groups
For a comprehensive list of ASMCMD commands, you can refer to the Oracle 11.2 Database Storage Administrator's Guide.
With the more frequent use of ASMCMD, I got tired of switching between the ASM and DB environment settings. As a result, I have created an ASM wrapper script which calls asmcmd (i.e. it works when ASM is mounted or running) with the ability to preserve your DB environment settings.
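Purely for illustration, a minimal sketch of how such a wrapper could be put together is shown below. The real sos_asm.sh may differ; the +ASM instance name, an /etc/oratab entry for it, and oraenv being in the PATH are all assumptions:
#!/bin/bash
# sos_asm.sh (sketch): run asmcmd under the ASM environment without disturbing the caller's DB settings.
# Because this runs in its own shell, the caller's ORACLE_SID/ORACLE_HOME are left untouched.
export ORACLE_SID=+ASM        # assumption: the ASM instance is named +ASM
export ORAENV_ASK=NO
. oraenv >/dev/null           # point ORACLE_HOME/PATH at the Grid (ASM) home
asmcmd "$@"                   # pass all arguments straight through to asmcmd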
An example of using this script is provided below:
NOTE: Change to the directory where the sos_asm.sh script was downloaded and set the executable permission accordingly:
[oracle@lnx01] cd /home/oracle/scripts
[oracle@lnx01] chmod +x sos_asm.sh
Example 1:
Using the sos_asm.sh script, I performed an ASM metadata backup using asmcmd, but was also able to preserve the database environment settings for the database called BLUE.
[oracle@lnx01] export ORACLE_SID=BLUE; . oraenv
[oracle@lnx01] cd /home/oracle/scripts
[oracle@lnx01] ./sos_asm.sh md_backup /home/oracle/asm_backup/dgbk_DG_DATA -G DG_DATA
[oracle@lnx01] sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 20 23:09:31 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select sys_context('USERENV','DB_NAME') DB from dual;
DB
-----------
BLUE
Tuesday, 5 July 2011
Unable to Run Opatch Cleanup on Grid Infrastructure Home
In an environment where you have the Grid Infrastructure software installed (i.e. for RAC or ASM configurations), it is important to remember to unlock the software home before performing activities such as cleaning up the OPatch storage.
Databases deployed with no Grid Infrastructure software will not have this issue.
A. Steps to Reproduce Issue
The following command was executed when attempting to clean up the patch storage:
oracle@lnx01:[GRID]$ export ORACLE_SID=GRID; . oraenv
oracle@lnx01:[GRID]$ $ORACLE_HOME/OPatch/opatch util cleanup
B. Error Reported
The following output error was reported below:
Invoking OPatch 11.2.0.1.5
Oracle Interim Patch Installer version 11.2.0.1.5
Copyright (c) 2010, Oracle Corporation. All rights reserved.
UTIL session
Oracle Home : /u01/app/oracle/11.2.0.2/grid
Central Inventory : /u01/app/oracle/oraInventory
from : /etc/oracle/oraInst.loc
OPatch version : 11.2.0.1.5
OUI version : 11.2.0.2.0
OUI location : /u01/app/oracle/11.2.0.2/grid/oui
Log file location : /u01/app/oracle/11.2.0.2/grid/cfgtoollogs/opatch/opatch2011-05-04_10-36-59AM.log
Patch history file: /u01/app/oracle/11.2.0.2/grid/cfgtoollogs/opatch/opatch_history.txt
OPatchSession cannot load inventory for the given Oracle Home /u01/app/oracle/11.2.0.2/grid. Possible causes are:
No read or write permission to ORACLE_HOME/.patch_storage
Central Inventory is locked by another OUI instance
No read permission to Central Inventory
The lock file exists in ORACLE_HOME/.patch_storage
The Oracle Home does not exist in Central Inventory
UtilSession failed: Locker::lock() mkdir /u01/app/oracle/11.2.0.2/grid/.patch_storage
C. Solution
1. Unlock the Grid Infrastructure Home:
oracle@lnx01:[GRID]$ export ORACLE_SID=GRID; . oraenv
oracle@lnx01:[GRID]$ sudo $ORACLE_HOME/crs/install/rootcrs.pl -unlock
2. Perform the OPatch storage cleanup:
oracle@lnx01:[GRID]$ $ORACLE_HOME/OPatch/opatch util cleanup
3. Lock the Grid Infrastructure Home:
oracle@lnx01:[GRID]$ sudo $ORACLE_HOME/crs/install/rootcrs.pl -patch
Labels:
ASM,
cleanup,
Grid Infrastructure,
lock,
OPatch,
RAC,
rootcrs.pl,
software,
troubleshoot,
unlock
Thursday, 9 June 2011
Viewing Realtime Alert Log via ADRCI
ADR (Automatic Diagnostic Repository) was introduced in 11g and is a data store for diagnostic information for Oracle components such as the database, ASM, listeners and SCAN listeners. It stores trace files and logs in both clear text and XML format.
ADR also provides the capability to view alert (and listener) logs in real time.
For example, if I wanted to view the alert log in real time for the database called RED, I would do the following:
[oracle@lnx01] adrci
adrci> set home RED
adrci> show alert -tail -f
If you have multiple ADR homes and wish to quickly switch between alert or listener logs, I have created a script which can be downloaded called sos_alert.sh.
Examples of using this script are provided below, but first change to the directory where sos_alert.sh was downloaded and set the executable permission accordingly:
[oracle@lnx01] cd /home/oracle/scripts
[oracle@lnx01] chmod +x sos_alert.sh
Example 1: To view alert log for the database named RED
[oracle@lnx01] ./sos_alert.sh RED
Example 2: To view alert log for ASM
[oracle@lnx01] ./sos_alert.sh +ASM
Example 3: To view listener log for listener named “LISTENER”
[oracle@lnx01] ./sos_alert.sh LISTENER
Example 4: To view listener log for scan listener named “LISTENER_SCAN1”
[oracle@lnx01] ./sos_alert.sh LISTENER_SCAN1
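For reference, a minimal sketch of what such a wrapper might look like is shown below. The actual sos_alert.sh may differ; it assumes the argument appears in an ADR home path as reported by adrci:
#!/bin/bash
# sos_alert.sh (sketch): tail the alert/listener log for the ADR home matching the first argument
TARGET=$1                                        # e.g. RED, +ASM, LISTENER, LISTENER_SCAN1
ADR_HOME=$(adrci exec="show homes" | grep -i "${TARGET}" | head -1)
if [ -z "${ADR_HOME}" ]; then
  echo "No ADR home found matching ${TARGET}"; exit 1
fi
adrci exec="set home ${ADR_HOME}; show alert -tail -f"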
Wednesday, 4 May 2011
OPatch Friendly
The OPatch utility has now got a little fancier and more user friendly.
Download the latest OPatch (Patch 6880880) for the Oracle software release (e.g. 10.2, 11.1, 11.2) from My Oracle Support and extract it to the appropriate Oracle software directory.
NOTE: The OPatch utility can be applied to Database, Grid Control (OMS and Agent), Grid Infrastructure (ASM and CRS) software home directories.
In the example below, the database software release is 11.2 running on a Linux 64-bit platform, so to update the OPatch utility the patch file p6880880_112000_Linux-x86-64.zip was downloaded.
Below are just a few examples of what you can now do with the more recent versions of OPatch.
A. To download and install latest OPatch version
1. To verify the version of OPatch currently installed, simply run:
Syntax:
export ORACLE_SID=<ORACLE_SID>; . oraenv
$ORACLE_HOME/OPatch/opatch version
Example:
[oracle@lnx01] export ORACLE_SID=112DB; . oraenv
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch version
2. Backup the old OPatch utility:
[oracle@lnx01] mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.orig
3. Extract and install the latest OPatch into the Oracle Home directory:
Syntax:
unzip p6880880_<RELEASE>_<OS_PLATFORM>.zip -d $ORACLE_HOME
Example:
[oracle@lnx01] unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
4. Re-run step 1 from this section to verify new version of OPatch.
B. List Detailed Information About a Patch Before Applying It
The OPatch utility can be used to verify details of a patch before applying it to the Oracle Home directory. The information that can be verified includes, but is not limited to:
• Check if patch is a Rolling Patch
• Check if patch is a Patchset Update (PSU)
• Check if patch can be run with “opatch auto” option
• Check OS platform the patch can be applied on
• The actual actions and/or steps that the patch contains, without applying the patch. This includes detailed information on the files it touches, copies and relinks.
Syntax:
$ORACLE_HOME/OPatch/opatch query -all |more
[oracle@lnx01] cd /home/oracle/download/patches/12311357
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch query -all |more
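If, for instance, you only want to check whether the patch is a rolling patch, the query output can simply be filtered. The exact wording of that attribute can vary between OPatch versions, so the grep pattern below is only an assumption:
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch query -all | grep -i rolling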
C. Looking for a Particular Patch Already Applied based on a Bug or Patch Description
To search for a patch based on a search string, OPatch can list a detailed description of each patch applied to the Oracle home directory.
Syntax:
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i "<string>"
This is particularly handy if you want to look for a Patchset Update (PSU) or Daylight Savings Time (DST) patch.
Example 1:
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i "DATABASE PSU"
Example 2:
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i "DST"
D. Cleanup Patch Storage to Reclaim Space
OPatch can now determine and clean up files in the patch storage that are no longer required. To do so, run the following:
Syntax:
[oracle@lnx01] $ORACLE_HOME/OPatch/opatch util cleanup
Cluster Resource Control Filters
In 11.2 RAC, to avoid listing all cluster resources (via CRSCTL), you can use the filter switch option “-w”. This can be useful if you only want to look at a specific status of a given resource.
A. List for All Databases Managed by Cluster
The example below shows the status listing for all databases managed by the cluster.
[oracle@lnx01] export ORACLE_SID=GRID; . oraenv
[oracle@lnx01] crsctl stat res -t -w "TYPE = ora.database.type"
The output for the above example is as follows:
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.salt.db
1 ONLINE ONLINE lnx01 Open
2 ONLINE ONLINE lnx02 Open
ora.pepper.db
1 ONLINE ONLINE lnx01 Open
2 ONLINE ONLINE lnx02 Open
NOTE: The filter from the example can be changed from “ora.database.type” to “ora.service.type”. For more options, see the listing below.
[oracle@lnx01] crsctl stat res |grep "TYPE=" |uniq
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.scan_listener.type
TYPE=ora.diskgroup.type
TYPE=application
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.service.type
TYPE=ora.database.type
TYPE=ora.service.type
TYPE=ora.gsd.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_vip.type
TYPE=ora.database.type
B. Check Cluster Resources that are offline
Example:
[oracle@lnx01] crsctl stat res -t -w "STATE = OFFLINE"
NOTE: If not using GSD, then it is ok for the following resource to be offline.
Sample Output:
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.gsd
OFFLINE OFFLINE lnx01
OFFLINE OFFLINE lnx02
C. Verifying local resources that are online
Example:
[oracle@lnx01] crsctl stat res -init -t -w "STATE = ONLINE"
NOTE: You can use “-init” to check Local Resources
Sample Output:
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE lnx01 Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE lnx01
ora.crf
1 ONLINE ONLINE lnx01
ora.crsd
1 ONLINE ONLINE lnx01
ora.cssd
1 ONLINE ONLINE lnx01
ora.cssdmonitor
1 ONLINE ONLINE lnx01
ora.ctssd
1 ONLINE ONLINE lnx01 OBSERVER
ora.diskmon
1 ONLINE ONLINE lnx01
ora.drivers.acfs
1 ONLINE ONLINE lnx01
ora.evmd
1 ONLINE ONLINE lnx01
ora.gipcd
1 ONLINE ONLINE lnx01
ora.gpnpd
1 ONLINE ONLINE lnx01
ora.mdnsd
1 ONLINE ONLINE lnx01
D. To check for resources that are Not Online
When I run “crsctl stat res -t”, it prints line after line of all the resources in the cluster, and I find that at times you can easily overlook resources that are in a pending or bad status and need special attention. Don’t get me wrong, it is nice to know about all your resources; however, there is a better way to do a status check and report only the pending or bad resources.
Note that crsctl stat res -t -w "((TARGET = OFFLINE) or (STATE = OFFLINE))" will only report resources with an OFFLINE target or state. It will not report any other target or state information.
Hence, the example below shows how to capture resources with a pending (starting, intermediate) or bad (offline, unknown, etc.) status; that is, I want to report all resources whose target or state is not ONLINE.
In addition, I also want to exclude that annoying gsd (ora.gsd) resource from the check, as we are not hosting any Oracle 9i RAC databases on this cluster, so the target and state for this resource will always show as OFFLINE.
Example:
[oracle@lnx01] crsctl stat res -t -w "((TARGET != ONLINE) or (STATE != ONLINE)) and (TYPE != ora.gsd.type)"
NOTE: No output is shown if all cluster resources are up and running. Otherwise, the impacted resources will be displayed.
Tuesday, 12 April 2011
Opatch Fails with Patch ID is null and Error Code 73
This is an issue I have come across in the past. It can impact most Oracle software homes when applying patches if an incorrect OPatch version is installed.
A. Steps to Reproduce Issue
The following command was executed when applying a patch:
oracle@lnx01:[AGENT]$ cd /u01/app/oracle/software/10170020
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch apply
B. Error Reported
The following output error was reported below:
Invoking OPatch 10.2.0.4.5
Oracle Interim Patch Installer version 10.2.0.4.5
Copyright (c) 2008, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/agent10g
Central Inventory : /u01/app/oracle/oraInventory
from : /etc/oracle/oraInst.loc
OPatch version : 10.2.0.4.5
OUI version : 10.2.0.5.0
OUI location : /u01/app/oracle/agent10g/oui
Log file location : /u01/app/oracle/agent10g/cfgtoollogs/opatch/opatch2011-03-15_07-55-15AM.log
Patch history file: /u01/app/oracle/agent10g/cfgtoollogs/opatch/opatch_history.txt
ApplySession failed: Patch ID is null.
System intact, OPatch will not attempt to restore the system
OPatch failed with error code 73
C. Solution
1. Download patch 6880880 for the version of the software release (i.e. 10.2, 11.1, 11.2, etc.) from My Oracle Support.
The patch will be in the following format:
p6880880_<release>_<os_platform>.zip
For example in 10gR2 (10.2), the OPatch will be:
p6880880_102000_SOLARIS64.zip
2. Verify current OPatch version:
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch version
3. Backup the old OPatch
oracle@lnx01:[AGENT]$ cd $ORACLE_HOME
oracle@lnx01:[AGENT]$ mv OPatch/ OPatch.orig
4. Install the latest OPatch:
oracle@lnx01:[AGENT]$ cd /u01/app/oracle/software
oracle@lnx01:[AGENT]$ unzip -d $ORACLE_HOME p6880880_102000_SOLARIS64.zip
5. Verify OPatch version:
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch version
6. And then re-apply patch.
Friday, 11 February 2011
Oracle Backup and Recovery for a VLDB (Very Large Database)
There is an ongoing focus for businesses to minimise downtime and increase operational continuity, and with such challenges IT staff are constantly under pressure to meet these demands. For DBAs, backup and recovery is one of the key areas that can be reviewed to ensure a faster approach to recovery and thereby reduce downtime.
An RMAN feature called Incremental Merge (also referred to as Incrementally Updated) Backups can significantly reduce recovery time if configured correctly. Perhaps not widely used, but available since 10g, it is an ideal backup methodology for a VLDB (Very Large Database). How it works is that image copies of the data files are created and incrementals are then applied, rolling the image copies forward after each backup operation. It is also important to enable fast incremental backups with BCT (Block Change Tracking). For more detailed information on BCT, please refer to the Pythian whitepaper written by Alexander Gorbachev.
The table below lists the configuration that will be used for the following demonstration:
A. CONFIGURE BLOCK CHANGE TRACKING
1. To create block change tracking file, run the command in sqlplus as sysdba:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '<path_to_block_change_file_name>';
Example:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/u02/oradata/VLDB/bct_VLDB.ctf';
IMPORTANT NOTE: If running in a RAC configuration, this file needs to be placed on a cluster shared file system.
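To confirm that block change tracking is actually enabled, a quick check against v$block_change_tracking can be run; this is just an illustrative query, not part of the original steps:
# Sketch only: confirm that block change tracking is enabled
sqlplus -s / as sysdba <<'EOF'
select status, filename, bytes from v$block_change_tracking;
EOF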
2. Log in to the database and verify the location of the data files.
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> col file_name form a75
SQL> select file_id, file_name from dba_data_files
/
FILE_ID FILE_NAME
---------- --------------------------------------------------
1 /u02/oradata/VLDB/o1_mf_system_7o8kj2gt_.dbf
2 /u02/oradata/VLDB/o1_mf_sysux_2o3mt9jl_.dbf
3 /u02/oradata/VLDB/o1_mf_undotbs_1c5gb8nh_.dbf
4 /u02/oradata/VLDB/o1_mf_users_5v5db3vv_.dbf
B. CONFIGURE RMAN PARAMETERS
1. Register the database with the RMAN catalog if using catalog mode.
2. Connect to RMAN in catalog (or no catalog) mode.
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] rman target / catalog rman/@RMANCAT
3. Configure the preferred RMAN recovery window. In this example, the recovery window will be 3 days.
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
4. It is also recommended that you enable autobackup of the controlfile in RMAN.
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
C. Performing the Incremental Merge (Updated) Backup
1. Perform the backup of the database in RMAN. On the initial run this will create image copies of the data files; on subsequent runs it will apply the incrementals, rolling the image copies forward.
RMAN> run
{
sql 'alter system archive log current';
recover device type disk
copy of database with tag 'MIB_UPDATE';
backup incremental level 1
for recover of copy with tag 'MIB_UPDATE' database;
sql 'alter system archive log current';
}
list copy of database tag 'MIB_UPDATE';
list backup tag 'MIB_UPDATE';
2. Repeat the previous step for any future or subsequent backups.
D. Performing the Recovery from Incremental Merge (Updated) Backup
An unexpected outage has occurred, and a recovery is now required to bring the database back up as quickly as possible.
1. To perform the database recovery, ensure that the database is in MOUNT mode.
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> shutdown abort
SQL> startup mount
2. In RMAN, switch the database to use the image copies and then perform recovery.
RMAN> set echo on;
switch database to copy;
run {
recover database;
alter database open;
}
3. Log in to the database and verify that the data files are now pointing to the recovered image copies.
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> col file_name form a75
SQL> select file_id, file_name from dba_data_files
/
FILE_ID FILE_NAME
---------- --------------------------------------------------
1 /u03/orafra/VLDB/datafile/o1_mf_system_6o7mj1gt_.dbf
2 /u03/orafra/VLDB/datafile/o1_mf_sysaux_6o7mj4j9_.dbf
3 /u03/orafra/VLDB/datafile/o1_mf_undotbs1_6o7ml4yx_.dbf
4 /u03/orafra/VLDB/datafile/o1_mf_users_6o7mlhl4_.dbf
E. Switching database files back to original data file location
As the database continues to run from where the recovered image copies are located, there may be a desire to restore it back to its original location.
This is optional; however, it is recommended if you are running the image copies from the FRA (Fast Recovery Area).
1. Back up the database in RMAN:
RMAN> run
{
ALLOCATE CHANNEL d1 TYPE DISK FORMAT '/u02/oradata/VLDB/%U.dbf';
recover
copy of database with tag 'MIB_RELOCATE';
backup incremental level 1
for recover of copy with tag 'MIB_RELOCATE' database;
release channel d1;
}
list copy of database tag 'MIB_RELOCATE';
list backup tag 'MIB_RELOCATE';
NOTE: If this is the first time the backup has been run to the new location, it will create the image copies of the data files.
2. Repeat the previous step for any future or subsequent backups until a time can be scheduled to switch back.
3. When a planned time has been scheduled and switch back can be performed, shutdown database and startup in MOUNT mode.
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup mount
4. Switch the database to use the image copies and then perform recovery. The database will now be started using the image copies, which are now configured in the original data file location.
RMAN> set echo on;
switch database to copy;
run {
recover database;
alter database open;
}
5. Log in to the database and verify that the data files are pointing to the recovered image copies back in the original location (or directory).
[oracle@lnx01] export ORACLE_SID=VLDB; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> col file_name form a75
SQL> select file_id, file_name from dba_data_files
/
FILE_ID FILE_NAME
---------- --------------------------------------------------
1 /u02/oradata/VLDB/data_D-VLDB_I-2643384069_TS-SYSTEM_FNO-1_cim4edhc.dbf
2 /u02/oradata/VLDB/data_D-VLDB_I-2643384069_TS-SYSAUX_FNO-2_cjm4edig.dbf
3 /u02/oradata/VLDB/data_D-VLDB_I-2643384069_TS-UNDOTBS1_FNO-3_ckm4edjj.dbf
4 /u02/oradata/VLDB/data_D-VLDB_I-2643384069_TS-USERS_FNO-4_clm4edk2.dbf
F. CONSIDERATIONS
In conclusion, it is worth noting that when using the RMAN Incremental Merge (Updated) Backup strategy, it is important to consider the following:
• More storage is required to store the copy of the database in addition to the traditional incremental backups.
• For point-in-time recovery, continue to perform at least one full backup of the database and include archive logs. This can be performed less frequently (i.e. weekly, fortnightly, monthly).
• Validate image copies and backups to detect whether there is corruption or missing files which may compromise recovery.
• Image copies of the database should be on the same storage tier as the online data files to ensure performance is not impacted when switching to the database copy for recovery.
• Image copies do not have to be stored in the FRA, but should be stored on disk.