
Friday, 21 June 2013

DBCA for RAC returns ORA-15025: could not open disk

An attempt to create a RAC database with DBCA fails with ASM errors when the wrong group ownership is set on the oracle binary. Note that this issue can also impact a standalone database running on ASM. It is most likely to occur when the Grid Infrastructure software and the Database software run under different OS users.

Summary:

Configuration              Grid Infrastructure (GRID)   Database (DB)
Software Owner (OS User)   grid                         oracle
Primary OS Group           oinstall                     oinstall
Admin Group                asmadmin                     dba
ASM Group                  asmdba                       asmdba
ORACLE_HOME                /u01/app/11.2.0.3/grid       /u01/app/11.2.0.3/db

A. Steps to Reproduce Issue

1.  OS user for Grid software is "grid":
grid@lnx01:[GRID]$ id -a
uid=54320(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54324(asmdba),54325(asmadmin),54326(asmoper)

2. OS user for Database software is "oracle":
oracle@lnx01:[DB]$ id -a
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(asmdba)

NOTE: The "asmdba" OS group is assigned to both OS users "grid" and "oracle".

3. As the oracle OS user, execute "dbca" to create the database:
Method 1 - run DBCA in interactive mode:
oracle@lnx01:[DB]$  dbca
Method 2 - run DBCA in silent mode:
oracle@lnx01:[DB]$  dbca -silent -responseFile dbca11203.rsp
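
For reference, a minimal response file for the silent run might look like the sketch below. This is an illustration only: the file name dbca11203.rsp comes from the command above, but the template, node list, diskgroup, and passwords are placeholder assumptions to replace with your own values.
[GENERAL]
RESPONSEFILE_VERSION = "11.2.0"
OPERATION_TYPE = "createDatabase"
[CREATEDATABASE]
GDBNAME = "BLUE"
SID = "BLUE"
NODELIST = "lnx01,lnx02"
TEMPLATENAME = "General_Purpose.dbc"
SYSPASSWORD = "<sys_password>"
SYSTEMPASSWORD = "<system_password>"
STORAGETYPE = ASM
DISKGROUPNAME = "DATA"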

B. Error Reported

DBCA fails, and the database alert log reports the following errors while attempting to create the database:
2013-05-02 17:59:22.197000 +10:00
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA4"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA2"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA3"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
SUCCESS: diskgroup DATA was mounted
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1401):
ORA-00600: internal error code, arguments: [kfioTranslateIO03], [], [], [],
[], [], [], [], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1401/BLUE_ora_29895_i1401.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:24.033000 +10:00
Dumping diagnostic data in directory=[cdmp_20130502175924], requested by
(instance=1, osid=29895), summary=[incident=1401].
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1402):
ORA-00600: internal error code, arguments: [17090], [], [], [], [], [], [],
[], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1402/BLUE_ora_29895_i1402.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:25.037000 +10:00
ERROR: unrecoverable error ORA-600 raised in ASM I/O path; terminating process
29895
Dumping diagnostic data in directory=[cdmp_20130502175925], requested by
(instance=1, osid=29895), summary=[incident=1402].
Shutting down instance (abort)
License high water mark = 2
USER (ospid: 29941): terminating the instance
Instance terminated by USER, pid = 29941
Instance shutdown complete


C. Solution


1. Verify that the oradism binary under the ${GRID_HOME}/bin directory has the correct setuid permission:
grid@lnx01:[GRID]$ ls -lrt /u01/app/11.2.0.3/grid/bin/oradism
-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/11.2.0.3/grid/bin/oradism


2. Verify that the oracle permissions (6751) are correct under the ${GRID_HOME}/bin directory:
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/grid/bin/oracle
-rwsr-s--x 1 grid oinstall 204090154 May  2 13:34 /u01/app/11.2.0.3/grid/bin/oracle


3. Verify that the oracle permissions (6751) are correct under the ${DB_HOME}/bin directory:
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/db/bin/oracle
-rwsr-s--x 1 oracle oinstall 221309039 May  2 13:49 /u01/app/11.2.0.3/db/bin/oracle


NOTE: If the permissions are not correct, run the following as the grid OS user from ${GRID_HOME}:
grid@lnx01:[GRID]$ /u01/app/11.2.0.3/grid/bin/setasmgidwrap o=/u01/app/11.2.0.3/db/bin/oracle

4. Change the group ownership of the oracle binary under ${DB_HOME} to asmadmin:
grid@lnx01:[GRID]$ chgrp asmadmin /u01/app/11.2.0.3/db/bin/oracle

5. Verify that the permissions and ownership are now correct under ${DB_HOME}:
grid@lnx01:[GRID]$ ls -rlt /u01/app/11.2.0.3/db/bin/oracle
-rwxr-x--x 1 oracle asmadmin 221309039 May  2 13:49 /u01/app/11.2.0.3/db/bin/oracle


6. Verify the SS_ASM_GRP setting in the config.c file under ${GRID_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/grid/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "oinstall"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


7. Verify the SS_ASM_GRP setting in the config.c file under ${DB_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/db/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP ""
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


8. Edit the config.c file under both ${GRID_HOME}/rdbms/lib and ${DB_HOME}/rdbms/lib and set the following: #define SS_ASM_GRP "asmdba"
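
If you prefer to script the edit, a sed one-liner along these lines works (a sketch; it assumes ${GRID_HOME} and ${DB_HOME} are set in the environment, keeps a .orig backup, and must be run by a user with write access to each home):
grid@lnx01:[GRID]$ for h in ${GRID_HOME} ${DB_HOME}; do sed -i.orig 's/^#define  *SS_ASM_GRP .*/#define SS_ASM_GRP "asmdba"/' ${h}/rdbms/lib/config.c; done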

9. Re-verify that the correct value is now set for SS_ASM_GRP under ${GRID_HOME} and ${DB_HOME}:
grid@lnx01:[GRID]$ cat ${GRID_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
oracle@lnx01:[DB]$ cat ${DB_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


10. Relink the Grid software as the grid OS user (sudo is used for the root steps):
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -unlock
grid@lnx01:[GRID]$ relink all
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -patch

NOTE: The change to ${DB_HOME}/rdbms/lib/config.c only takes effect once the Database home is relinked as well ("relink all" as the oracle OS user); after relinking, re-check the oracle binary ownership as per steps 4 and 5.

NOTE: Check log for errors: /u01/app/11.2.0.3/grid/install/relink.log
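A quick scan of the relink log for problems (the path is from the note above):
grid@lnx01:[GRID]$ grep -iE 'error|fail|undefined' /u01/app/11.2.0.3/grid/install/relink.log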


11. Make sure the above is correct on all nodes in the cluster; if not, correct it on the affected node using the steps above.
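For example, a loop such as the following can compare the binary across nodes (a sketch; the node names lnx01 and lnx02 are assumptions):
grid@lnx01:[GRID]$ for n in lnx01 lnx02; do echo "== $n =="; ssh $n ls -l /u01/app/11.2.0.3/db/bin/oracle; done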

12. Then re-run "dbca" to create the database.

Sunday, 16 June 2013

Unable to Create ASM Disk Group for OCR and Voting Disk

When installing the Grid Infrastructure (GI) software for a two-node RAC the other day, I came across an issue where the cluster could not create the ASM disk group for the OCR and Voting Disk because it was unable to find the appropriate ASM disks.

Configuration Setting:

The Grid Infrastructure (GI) ORACLE_HOME is:
GI_HOME=/u01/app/oracle/11.2.0.3/grid

A. Steps to Reproduce Issue


The issue occurred when executing root.sh with super-user-equivalent privileges:
oracle@lnx01:[GRID]$ sudo /u01/app/oracle/11.2.0.3/grid/root.sh

B. Error Reported


The following output was reported:
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/oracle/11.2.0.3/grid
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/11.2.0.3/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lnx01'
CRS-2676: Start of 'ora.cssdmonitor' on 'lnx01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lnx01'
CRS-2672: Attempting to start 'ora.diskmon' on 'lnx01'
CRS-2676: Start of 'ora.diskmon' on 'lnx01' succeeded
CRS-2676: Start of 'ora.cssd' on 'lnx01' succeeded

Disk Group DG_CCF creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/asm_ccf3' matches no disks
ORA-15031: disk specification '/dev/asm_ccf2' matches no disks
ORA-15031: disk specification '/dev/asm_ccf1' matches no disks

This error is also found in the log:
${GI_HOME}/install/root_<HOST>_<TIMESTAMP>.log

C. Solution 


1. Verify that the partitions are visible and have the appropriate permissions and ownership. If not, ask your system administrator to correct the issue:
oracle@lnx01:[GRID]$ ls -l /dev/sd*1

NOTE: These devices may be named differently from /dev/sd*1; please check with your system administrator.
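
For illustration, healthy output for partitions presented directly to ASM would look something like this (device names, sizes, and timestamps are made up; ownership by the Grid software owner and the ASM admin group with mode 660 is the usual pattern):
oracle@lnx01:[GRID]$ ls -l /dev/sd*1
brw-rw---- 1 grid asmadmin 8, 17 Jun 16 10:02 /dev/sdb1
brw-rw---- 1 grid asmadmin 8, 33 Jun 16 10:02 /dev/sdc1
brw-rw---- 1 grid asmadmin 8, 49 Jun 16 10:02 /dev/sdd1
If ASMLib is used instead (see step 2), the underlying /dev/sd* partitions may remain root-owned, with the configured ownership applied to the devices under /dev/oracleasm/disks.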

2. If running on the Linux platform with Oracle ASMLib installed, stamp the ASM disks using oracleasm; otherwise this step can be skipped.

oracle@lnx01:[GRID]$ sudo oracleasm createdisk CCF1 /dev/sdb1
oracle@lnx01:[GRID]$ sudo oracleasm createdisk CCF2 /dev/sdc1
oracle@lnx01:[GRID]$ sudo oracleasm createdisk CCF3 /dev/sdd1
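
To confirm the disks were stamped, and to make them visible on the other cluster node, the standard ASMLib verification commands can be run (scandisks generally requires root privileges):
oracle@lnx01:[GRID]$ sudo oracleasm scandisks
oracle@lnx01:[GRID]$ oracleasm listdisks
CCF1
CCF2
CCF3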


3. Edit the ${GI_HOME}/crs/install/crsconfig_params file and correct the paths to the ASM disks.

Example:
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/CCF1,/dev/oracleasm/disks/CCF2,/dev/oracleasm/disks/CCF3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*

NOTE: If using oracleasm, specify the /dev/oracleasm/disks path to the disks; otherwise specify the path to the pseudo device names. Please check with your system administrator for the device names allocated to the ASM disks.
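
Before re-running root.sh, it is worth confirming that the discovery string actually matches the disks (output is illustrative; the grid:asmadmin ownership assumes ASMLib was configured with those settings):
oracle@lnx01:[GRID]$ ls -l /dev/oracleasm/disks/
brw-rw---- 1 grid asmadmin 8, 17 Jun 16 10:05 CCF1
brw-rw---- 1 grid asmadmin 8, 33 Jun 16 10:05 CCF2
brw-rw---- 1 grid asmadmin 8, 49 Jun 16 10:05 CCF3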

4. Re-execute the root.sh script with super-user-equivalent privileges:
oracle@lnx01:[GRID]$ sudo /u01/app/oracle/11.2.0.3/grid/root.sh
5. The execution of the root.sh script should now complete successfully.

Tuesday, 5 July 2011

Unable to Run Opatch Cleanup on Grid Infrastructure Home

In an environment where the Grid Infrastructure software is installed (i.e. for RAC or ASM configurations), it is important to unlock the software home before performing maintenance activities such as cleaning up the OPatch patch storage.

Databases deployed with no Grid Infrastructure software will not have this issue.

A. Steps to Reproduce Issue


The following command was executed when attempting to clean up the patch storage:
oracle@lnx01:[GRID]$ export ORACLE_SID=GRID; . oraenv
oracle@lnx01:[GRID]$ $ORACLE_HOME/OPatch/opatch util cleanup


B. Error Reported


The following error output was reported:
Invoking OPatch 11.2.0.1.5
Oracle Interim Patch Installer version 11.2.0.1.5
Copyright (c) 2010, Oracle Corporation.  All rights reserved.
UTIL session
Oracle Home       : /u01/app/oracle/11.2.0.2/grid
Central Inventory : /u01/app/oracle/oraInventory
   from           : /etc/oracle/oraInst.loc
OPatch version    : 11.2.0.1.5
OUI version       : 11.2.0.2.0
OUI location      : /u01/app/oracle/11.2.0.2/grid/oui
Log file location : /u01/app/oracle/11.2.0.2/grid/cfgtoollogs/opatch/opatch2011-05-04_10-36-59AM.log
Patch history file: /u01/app/oracle/11.2.0.2/grid/cfgtoollogs/opatch/opatch_history.txt

OPatchSession cannot load inventory for the given Oracle Home /u01/app/oracle/11.2.0.2/grid. Possible causes are:
   No read or write permission to ORACLE_HOME/.patch_storage
   Central Inventory is locked by another OUI instance
   No read permission to Central Inventory
   The lock file exists in ORACLE_HOME/.patch_storage
   The Oracle Home does not exist in Central Inventory
UtilSession failed: Locker::lock() mkdir /u01/app/oracle/11.2.0.2/grid/.patch_storage

C. Solution 


1. Unlock the Grid Infrastructure Home:
oracle@lnx01:[GRID]$ export ORACLE_SID=GRID; . oraenv
oracle@lnx01:[GRID]$ sudo $ORACLE_HOME/crs/install/rootcrs.pl -unlock


2. Perform the OPatch storage cleanup:
oracle@lnx01:[GRID]$ $ORACLE_HOME/OPatch/opatch util cleanup

3. Lock the Grid Infrastructure Home:
oracle@lnx01:[GRID]$ sudo $ORACLE_HOME/crs/install/rootcrs.pl -patch 
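
Since the final re-lock is easy to forget, the three steps can be wrapped in a small script. This is a sketch only: it assumes oraenv resolves ORACLE_SID=GRID non-interactively and that sudo access to rootcrs.pl is in place.
#!/bin/bash
# Unlock the GI home, clean up the OPatch patch storage, then re-lock.
set -e
export ORACLE_SID=GRID ORAENV_ASK=NO
. oraenv                                           # sets ORACLE_HOME for the GRID SID
sudo "$ORACLE_HOME"/crs/install/rootcrs.pl -unlock
"$ORACLE_HOME"/OPatch/opatch util cleanup          # prompts for confirmation
sudo "$ORACLE_HOME"/crs/install/rootcrs.pl -patch  # re-locks the home
Note that with set -e a failed cleanup aborts the script before the re-lock, so in that case rootcrs.pl -patch must be re-run by hand.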

Tuesday, 12 April 2011

Opatch Fails with Patch ID is null and Error Code 73

This is an issue I have come across in the past. It can impact most Oracle software homes when applying patches if an incorrect OPatch version is in place.

A. Steps to Reproduce Issue


The following command was executed when applying a patch:
oracle@lnx01:[AGENT]$ cd /u01/app/oracle/software/10170020
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch apply


B. Error Reported


The following error output was reported:
Invoking OPatch 10.2.0.4.5
Oracle Interim Patch Installer version 10.2.0.4.5
Copyright (c) 2008, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/agent10g
Central Inventory : /u01/app/oracle/oraInventory
   from           : /etc/oracle/oraInst.loc
OPatch version    : 10.2.0.4.5
OUI version       : 10.2.0.5.0
OUI location      : /u01/app/oracle/agent10g/oui
Log file location : /u01/app/oracle/agent10g/cfgtoollogs/opatch/opatch2011-03-15_07-55-15AM.log
Patch history file: /u01/app/oracle/agent10g/cfgtoollogs/opatch/opatch_history.txt
ApplySession failed: Patch ID is null.
System intact, OPatch will not attempt to restore the system
OPatch failed with error code 73

C. Solution 


1. Download patch 6880880 for your software release (i.e. 10.2, 11.1, 11.2, etc.) from My Oracle Support.

The patch will be in the following format:
p6880880_<release>_<os_platform>.zip

For example, in 10gR2 (10.2) on 64-bit Solaris, the OPatch zip will be:
p6880880_102000_SOLARIS64.zip

2. Verify current OPatch version:
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch version
3. Back up the old OPatch:
oracle@lnx01:[AGENT]$ cd $ORACLE_HOME
oracle@lnx01:[AGENT]$ mv OPatch/ OPatch.orig

4. Install the latest OPatch:
oracle@lnx01:[AGENT]$ cd /u01/app/oracle/software
oracle@lnx01:[AGENT]$ unzip -d $ORACLE_HOME p6880880_102000_SOLARIS64.zip

5. Verify OPatch version:
oracle@lnx01:[AGENT]$ $ORACLE_HOME/OPatch/opatch version

6. Then re-apply the patch.
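
If you maintain many homes, steps 2 to 5 can be collapsed into a short script (a sketch; the staging directory and zip file name follow the example above and should be adjusted for your release and platform):
#!/bin/bash
# Replace the OPatch directory in the current ORACLE_HOME with patch 6880880.
set -e
STAGE=/u01/app/oracle/software                # assumption: patch staging area
ZIP=p6880880_102000_SOLARIS64.zip             # assumption: zip for your release/platform
cd "$ORACLE_HOME"
"$ORACLE_HOME"/OPatch/opatch version          # record the old version
mv OPatch "OPatch.orig.$(date +%Y%m%d)"       # keep a dated backup
unzip -q -d "$ORACLE_HOME" "$STAGE/$ZIP"
"$ORACLE_HOME"/OPatch/opatch version          # confirm the new version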