Friday 21 June 2013

DBCA for RAC returns ORA-15025: could not open disk

An attempt to create a RAC database on ASM fails when the wrong group ownership is set on the database oracle binary. The same issue can also affect a standalone (single-instance) database running on ASM. It is most likely to occur when the Grid Infrastructure software and the Database software are owned by different OS users (role separation).

Summary:

Configuration              Grid Infrastructure (GRID)    Database (DB)
-------------------------  ----------------------------  ----------------------
Software Owner (OS User)   grid                          oracle
Primary OS Group           oinstall                      oinstall
Admin Group                asmadmin                      dba
ASM Group                  asmdba                        asmdba
ORACLE_HOME                /u01/app/11.2.0.3/grid        /u01/app/11.2.0.3/db

A. Steps to Reproduce Issue

1.  OS user for Grid software is "grid":
grid@lnx01:[GRID]$ id -a
uid=54320(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54324(asmdba),54325(asmadmin),54326(asmoper)

2. OS user for Database software is "oracle":
oracle@lnx01:[DB]$ id -a
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(asmdba)

NOTE: The "asmdba" OS group is assigned to both OS users "grid" and "oracle".

3. As the oracle OS user, run "dbca" to create the database:
Method 1 - run DBCA in interactive mode:
oracle@lnx01:[DB]$  dbca
Method 2 - run DBCA in silent mode:
oracle@lnx01:[DB]$  dbca -silent -responsefile dbca11203.rsp
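
For reference, the same silent-mode creation can also be expressed purely with command-line flags instead of a response file. The sketch below follows the 11.2 DBCA silent-mode syntax; the database name BLUE is taken from the trace paths later in this post, while the passwords and the second node name (lnx02) are placeholders:
oracle@lnx01:[DB]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname BLUE -sid BLUE \
  -sysPassword <sys_password> -systemPassword <system_password> \
  -storageType ASM -diskGroupName DATA \
  -nodelist lnx01,lnx02 \
  -characterSet AL32UTF8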

B. Error Reported

DBCA fails while creating the database, and the new instance's alert log reports the following errors:
2013-05-02 17:59:22.197000 +10:00
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA4"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA2"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA3"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
ORA-15025: could not open disk "/dev/oracleasm/disks/DATA1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
SUCCESS: diskgroup DATA was mounted
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1401):
ORA-00600: internal error code, arguments: [kfioTranslateIO03], [], [], [],
[], [], [], [], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1401/BLUE_ora_29895_i1401.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:24.033000 +10:00
Dumping diagnostic data in directory=[cdmp_20130502175924], requested by
(instance=1, osid=29895), summary=[incident=1401].
Errors in file /u01/app/diag/rdbms/blue/BLUE/trace/BLUE_ora_29895.trc
(incident=1402):
ORA-00600: internal error code, arguments: [17090], [], [], [], [], [], [],
[], [], [], [], []
Incident details in:
/u01/app/diag/rdbms/blue/BLUE/incident/incdir_1402/BLUE_ora_29895_i1402.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2013-05-02 17:59:25.037000 +10:00
ERROR: unrecoverable error ORA-600 raised in ASM I/O path; terminating process
29895
Dumping diagnostic data in directory=[cdmp_20130502175925], requested by
(instance=1, osid=29895), summary=[incident=1402].
Shutting down instance (abort)
License high water mark = 2
USER (ospid: 29941): terminating the instance
Instance terminated by USER, pid = 29941
Instance shutdown complete
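
The "Permission denied" lines point at the ASM disk devices, so their ownership is also worth confirming before touching the binaries. This is a sketch; the expected owner and group depend on how ASMLib was configured (commonly grid:asmadmin with mode brw-rw---- in a role-separated setup like this one):
grid@lnx01:[GRID]$ ls -l /dev/oracleasm/disks/
Note that the diskgroup still mounts successfully from the ASM instance, which suggests the devices themselves are accessible to the grid software; the root cause here is the group ownership of the database oracle binary, addressed in the solution below.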


C. Solution


1. Verify that the oradism binary under the ${GRID_HOME}/bin directory is owned by root and has the setuid bit set (mode 4750):
grid@lnx01:[GRID]$ ls -lrt /u01/app/11.2.0.3/grid/bin/oradism
-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/11.2.0.3/grid/bin/oradism
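
If the root ownership or the setuid bit shown above is missing, re-running the Grid Infrastructure root scripts is the formal fix, but it can also be restored directly as root. This is a sketch only; the 4750 mode and root:oinstall ownership simply match the listing above:
root@lnx01:~# chown root:oinstall /u01/app/11.2.0.3/grid/bin/oradism
root@lnx01:~# chmod 4750 /u01/app/11.2.0.3/grid/bin/oradism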


2. Verify that the oracle binary under the ${GRID_HOME}/bin directory has the correct permissions (6751):
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/grid/bin/oracle
-rwsr-s--x 1 grid oinstall 204090154 May  2 13:34 /u01/app/11.2.0.3/grid/bin/oracle


3. Verify that the oracle binary under the ${DB_HOME}/bin directory has the correct permissions (6751):
grid@lnx01:[GRID]$ ls -l /u01/app/11.2.0.3/db/bin/oracle
-rwsr-s--x 1 oracle oinstall 221309039 May  2 13:49 /u01/app/11.2.0.3/db/bin/oracle


NOTE: If the permissions are not correct, run setasmgidwrap as the grid OS user from ${GRID_HOME}:
grid@lnx01:[GRID]$ /u01/app/11.2.0.3/grid/bin/setasmgidwrap o=/u01/app/11.2.0.3/db/bin/oracle

4. Change the group ownership of the oracle binary under ${DB_HOME} to asmadmin:
grid@lnx01:[GRID]$ chgrp asmadmin /u01/app/11.2.0.3/db/bin/oracle

5. Verify that the permissions and ownership of the oracle binary under ${DB_HOME} are now correct:
grid@lnx01:[GRID]$ ls -rlt /u01/app/11.2.0.3/db/bin/oracle
-rwxr-x--x 1 oracle asmadmin 221309039 May  2 13:49 /u01/app/11.2.0.3/db/bin/oracle


6. Verify the SS_ASM_GRP setting in the config.c file under ${GRID_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/grid/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "oinstall"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


7. Verify the SS_ASM_GRP setting in the config.c file under ${DB_HOME}/rdbms/lib:
grid@lnx01:[GRID]$ cat /u01/app/11.2.0.3/db/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP ""
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


8. Edit the config.c file under ${GRID_HOME}/rdbms/lib and ${DB_HOME}/rdbms/lib and set the following: SS_ASM_GRP "asmdba" (a non-interactive way to make the edit is sketched below).
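
One way to make the edit without opening an editor is with sed, run by the owner of each home. This is only a sketch; the -i.bak option keeps a backup copy of each original config.c:
grid@lnx01:[GRID]$ sed -i.bak 's/^#define SS_ASM_GRP.*/#define SS_ASM_GRP "asmdba"/' /u01/app/11.2.0.3/grid/rdbms/lib/config.c
oracle@lnx01:[DB]$ sed -i.bak 's/^#define SS_ASM_GRP.*/#define SS_ASM_GRP "asmdba"/' /u01/app/11.2.0.3/db/rdbms/lib/config.c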

9. Re-verify that the correct value is now set for SS_ASM_GRP under ${GRID_HOME} and ${DB_HOME}:
grid@lnx01:[GRID]$ cat ${GRID_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};
oracle@lnx01:[DB]$ cat ${DB_HOME}/rdbms/lib/config.c|grep SS_ASM_GRP
#define SS_ASM_GRP "asmdba"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP, SS_ASM_GRP};  


10. Relink the Grid Infrastructure software as the grid OS user (the rootcrs.pl steps require root privileges, here via sudo):
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -unlock
grid@lnx01:[GRID]$ relink all
grid@lnx01:[GRID]$ sudo /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -patch

NOTE: Check log for errors: /u01/app/11.2.0.3/grid/install/relink.log
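
A quick way to scan the relink log (a sketch; any matches should be reviewed in context, as some "error" strings in the log can be harmless):
grid@lnx01:[GRID]$ grep -iE 'error|fail' /u01/app/11.2.0.3/grid/install/relink.log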


11. Make sure the permissions, ownership, and SS_ASM_GRP setting are correct on all nodes in the cluster. If they are not, correct them using the steps above (a quick cross-node check is sketched below).
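
A quick cross-node check, assuming user equivalence (passwordless ssh) between the nodes as set up for the RAC install; lnx02 is a hypothetical second node name:
oracle@lnx01:[DB]$ for node in lnx01 lnx02; do ssh $node "ls -l /u01/app/11.2.0.3/db/bin/oracle /u01/app/11.2.0.3/grid/bin/oradism"; done
Each node should show the same asmadmin group on the oracle binary and the same root-owned setuid oradism as verified above.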

12. Re-run "dbca" to create the database.
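
Once DBCA completes successfully, the registration of the new database with the cluster can be confirmed with srvctl. The database name BLUE below is taken from the trace file paths in the alert log output above; substitute your own:
oracle@lnx01:[DB]$ srvctl status database -d BLUE
oracle@lnx01:[DB]$ srvctl config database -d BLUE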
