> Abrupt Shutdown/Unavailability of One Node in RAC DB
hog
posted Jan 12 2017, 12:44 AM
Post #1

Hi,

I am a DB administrator and we run a two-node RAC cluster database (Oracle 11gR2). Today when I checked the system, I found that one of the nodes had somehow shut down or become unavailable to the RAC. I tried to find the reason but could not find any. I am new to this field and would appreciate your help. The alert log from my node is:

Wed Jan 11 15:02:51 2017
Thread 2 advanced to log sequence 17689 (LGWR switch)
Current log# 3 seq# 17689 mem# 0: +temp1/mcm6/redo03_02.log
Current log# 3 seq# 17689 mem# 1: +fract1/mcm6/redo03_01n.log
Wed Jan 11 15:04:34 2017
Archived Log entry 13139 added for thread 2 sequence 17688 ID 0x253faa8a dest 1:
Wed Jan 11 16:06:40 2017
Trace dumping is performing id=[cdmp_20170111160640]
Wed Jan 11 16:06:42 2017
Reconfiguration started (old inc 4, new inc 6)
List of instances:
2 (myinst: 2)
Global Resource Directory frozen
* dead instance detected - domain 0 invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jan 11 16:06:42 2017
Wed Jan 11 16:06:42 2017
LMS 0: 1 GCS shadows cancelled, 1 closed, 0 Xw survived
LMS 4: 2 GCS shadows cancelled, 2 closed, 0 Xw survived
Wed Jan 11 16:06:42 2017
Wed Jan 11 16:06:42 2017
LMS 3: 3 GCS shadows cancelled, 2 closed, 0 Xw survived
LMS 5: 2 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:06:42 2017
LMS 2: 25 GCS shadows cancelled, 6 closed, 0 Xw survived
Wed Jan 11 16:06:42 2017
LMS 1: 1 GCS shadows cancelled, 1 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Wed Jan 11 16:06:43 2017
Instance recovery: looking for dead threads
Beginning instance recovery of 1 threads
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Fix write in gcs resources
Reconfiguration complete
parallel recovery started with 32 processes
Started redo scan
Wed Jan 11 16:06:57 2017
Completed redo scan
read 183278 KB redo, 17317 data blocks need recovery
Started redo application at
Thread 1: logseq 20127, block 400233
Recovery of Online Redo Log: Thread 1 Group 2 Seq 20127 Reading mem 0
Mem# 0: +fract1/mcm6/redo02_01.log
Mem# 1: +temp1/mcm6/redo02_02.log
Wed Jan 11 16:07:13 2017
Reconfiguration started (old inc 6, new inc 8)
List of instances:
1 2 (myinst: 2)
Global Resource Directory frozen
Communication channels reestablished
Wed Jan 11 16:07:14 2017
* domain 0 valid = 1 according to instance 1
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jan 11 16:07:14 2017
LMS 4: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
LMS 5: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
LMS 3: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Wed Jan 11 16:07:14 2017
Completed redo application of 73.57MB
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
Completed instance recovery at
Thread 1: logseq 20127, block 766790, scn 13824061331
16000 data blocks read, 18651 data blocks written, 183278 redo k-bytes read
Thread 1 advanced to log sequence 20128 (thread recovery)
Redo thread 1 internally disabled at seq 20128 (SMON)
Wed Jan 11 16:07:24 2017
Thread 2 advanced to log sequence 17690 (LGWR switch)
Current log# 4 seq# 17690 mem# 0: +temp1/mcm6/redo04_02.log
Current log# 4 seq# 17690 mem# 1: +fract1/mcm6/redo04_01n.log
Wed Jan 11 16:07:55 2017
Archived Log entry 13142 added for thread 2 sequence 17689 ID 0x253faa8a dest 1:
Wed Jan 11 16:08:09 2017
Archived Log entry 13143 added for thread 1 sequence 20127 ID 0x253faa8a dest 1:
Wed Jan 11 16:08:10 2017
ARC2: Archiving disabled thread 1 sequence 20128
Archived Log entry 13144 added for thread 1 sequence 20128 ID 0x253faa8a dest 1:
Wed Jan 11 16:09:40 2017
SMON: Failed to acquire SEG2, skipping transaction 53.16.2695169
SMON: Parallel transaction recovery tried
Wed Jan 11 16:17:07 2017
db_recovery_file_dest_size of 204800 MB is 2.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Jan 11 16:51:45 2017
Thread 2 advanced to log sequence 17691 (LGWR switch)
Current log# 3 seq# 17691 mem# 0: +temp1/mcm6/redo03_02.log
Current log# 3 seq# 17691 mem# 1: +fract1/mcm6/redo03_01n.log
Wed Jan 11 16:52:20 2017
opidcl aborting process unknown ospid (6193) as a result of ORA-3113
Wed Jan 11 16:52:26 2017
Archived Log entry 13145 added for thread 2 sequence 17690 ID 0x253faa8a dest 1:
Wed Jan 11 17:28:34 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:28:56 2017
Trace dumping is performing id=[cdmp_20170111172856]
Wed Jan 11 17:33:10 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:33:13 2017
Trace dumping is performing id=[cdmp_20170111173313]
Wed Jan 11 17:36:57 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:37:00 2017
Trace dumping is performing id=[cdmp_20170111173700]
Wed Jan 11 17:41:22 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:41:24 2017
Trace dumping is performing id=[cdmp_20170111174124]
Wed Jan 11 17:44:01 2017
Thread 2 advanced to log sequence 17692 (LGWR switch)
Current log# 4 seq# 17692 mem# 0: +temp1/mcm6/redo04_02.log
Current log# 4 seq# 17692 mem# 1: +fract1/mcm6/redo04_01n.log
Wed Jan 11 17:45:19 2017
Archived Log entry 13148 added for thread 2 sequence 17691 ID 0x253faa8a dest 1:
Wed Jan 11 17:45:33 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:45:36 2017
Trace dumping is performing id=[cdmp_20170111174536]
Wed Jan 11 17:49:43 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:49:46 2017
Trace dumping is performing id=[cdmp_20170111174946]
Wed Jan 11 17:54:11 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:54:14 2017
Trace dumping is performing id=[cdmp_20170111175414]
Wed Jan 11 17:58:13 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 17:58:16 2017
Trace dumping is performing id=[cdmp_20170111175816]
Wed Jan 11 18:02:53 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 18:02:57 2017
Trace dumping is performing id=[cdmp_20170111180257]
Wed Jan 11 18:06:38 2017
Thread 2 advanced to log sequence 17693 (LGWR switch)
Current log# 3 seq# 17693 mem# 0: +temp1/mcm6/redo03_02.log
Current log# 3 seq# 17693 mem# 1: +fract1/mcm6/redo03_01n.log
Wed Jan 11 18:06:47 2017
Archived Log entry 13152 added for thread 2 sequence 17692 ID 0x253faa8a dest 1:
Wed Jan 11 18:06:56 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 18:06:59 2017
Trace dumping is performing id=[cdmp_20170111180659]
Wed Jan 11 18:11:46 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 18:11:49 2017
Trace dumping is performing id=[cdmp_20170111181149]
Wed Jan 11 18:15:50 2017
Errors in file /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc:
Wed Jan 11 18:15:53 2017
Trace dumping is performing id=[cdmp_20170111181553]
Wed Jan 11 18:17:12 2017


Please look into this log and tell me what really happened!
Regards
burleson
posted Jan 13 2017, 11:50 AM
Post #2

Hi Hog and welcome to the forum!

>> Please look into this log and tell me what really happened

You need to read the first trace file referenced in the alert log to see what caused your dead node connection:

http://www.dba-oracle.com/real_application...c_recovery.html
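
As a rough sketch of where to look first (assuming a standard 11gR2 Grid Infrastructure and RDBMS layout; GRID_HOME and the database name below are placeholders you will need to adjust for your environment):

# 1. Read the trace file named repeatedly in your alert log:
less /u01/app/oracle/diag/rdbms/mcm26/mcm62/trace/mcm62_j001_12174.trc

# 2. Browse recent alert entries, problems and incidents for the instance with ADRCI:
adrci exec="show homes"
adrci exec="set home diag/rdbms/mcm26/mcm62; show alert -tail 200"
adrci exec="set home diag/rdbms/mcm26/mcm62; show problem; show incident"

# 3. Check the Clusterware side on the node that went down; an instance eviction
#    or node reboot is usually explained there rather than in the RDBMS alert log:
less $GRID_HOME/log/$(hostname -s)/alert$(hostname -s).log
less $GRID_HOME/log/$(hostname -s)/cssd/ocssd.log

# 4. Confirm the current cluster and instance status:
$GRID_HOME/bin/crsctl stat res -t
srvctl status database -d <db_unique_name>

If the node itself rebooted, the ocssd.log and the OS messages file on the failed node will usually tell you why it was evicted.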

Good Luck!




--------------------
Hope this helps . . .

Donald K. Burleson
Oracle Press author
Author of Oracle Tuning: The Definitive Reference