ORA-00600 internal error code, arguments [kdsgrp1]
The physical standby database reported: ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], []
The SQL statement that triggered the error was a SELECT statement.
Looking this error up in the Oracle support documentation:
Applies to:
Oracle Server - Enterprise Edition - Version 10.2.0.4 and later
Information in this document applies to any platform.
***Checked for relevance on 12-Dec-2012***
Purpose
This document discusses the ora-600 [kdsgrp1] error, its possible causes, and the workaround solutions that can be tried.
Troubleshooting Steps
The ora-600 [kdsgrp1] error is thrown when a fetch operation fails to find the expected row. The error is hit in memory and so may be a memory-only error or an error that results from corruption on disk.
This error may indicate (but is not restricted to) any of the following conditions:
Lost writes
Parallel DML issues
Index corruption
Data block corruption
Consistent read [CR] issues
Buffer cache corruption
Common Work Around Solutions
If the issue is in memory only, we can try to resolve it immediately by flushing the buffer cache, but remember to consider the performance impact on production systems:
alter system flush buffer_cache;
If we have an intermittent consistent read issue, we can try disabling rowCR, an optimization that reduces consistent-read rollbacks during queries, by setting _row_cr=FALSE in the initialization files. However, this could degrade query performance. Please check the ratio of the two statistics "RowCR hits"/"RowCR attempts" to determine whether the workaround is to be used.
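A minimal sketch of that check, using the statistic names as exposed in v$sysstat; the hidden parameter should only be changed under the guidance of Oracle Support:
select name, value from v$sysstat where name in ('RowCR hits', 'RowCR attempts');
-- hidden parameter: change only with Oracle Support guidance
alter system set "_row_cr" = FALSE scope=spfile;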
If this is a result of index corruption then we can drop and rebuild the index. Note that this will require a maintenance window on production systems.
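For illustration, using the hypothetical index scott.pk_dept (the same name that appears in the validation example later in this note); rebuilding online lets DML against the table continue during the rebuild, though it still consumes I/O and temp space:
alter index scott.pk_dept rebuild online;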
Root Cause Determination
Now let's look at how we discover the root cause of the problem: the first step is to inspect the generated trace file. The ora-600 will generate both a trace file in the trace directory and an incident file under the incident id within the incident directory.
The top part of the trace file tells us the SQL that was being run when the error was hit:
----- Current SQL Statement for this session (sql_id=9mamr7xn4wg7x) -----
This immediately shows us the data objects that were accessed. Searching the trace file for the text string 'Plan Table' will locate the SQL execution plan that is dumped within this trace file. For a persistent issue this allows us to determine which indexes have been accessed and so identify indexes that should be validated to check for block corruption:
SQL> analyze index scott.pk_dept validate structure online;
Index analyzed.
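As a complement, if the cursor is still in the shared pool, the execution plan can also be pulled directly by the sql_id reported at the top of the trace file (shown here with the sql_id from the example above; dbms_xplan.display_cursor defaults to child cursor 0):
select * from table(dbms_xplan.display_cursor('9mamr7xn4wg7x'));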
Another approach is to use the file and block information contained in the trace file. At the top of the trace file we will find information on the block where the corruption was found:
*** SESSION ID:(3202.5644) 2011-03-19 04:12:16.910
row 07c7c8c7.a continuation at
file# 31 block# 510151 slot 11 not found
This information can be used to identify the object details in dba_extents:
Select owner, segment_name, segment_type, partition_name, tablespace_name
From dba_extents
Where relative_fno = 31
And 510151 between block_id and block_id + blocks - 1;
(Here the file# and block# values are taken from the trace excerpt above.)
We can then validate this object, for example a table and all its indexes:
Analyze table scott.dept validate structure cascade online;
Remember that we may be dealing with a permanent corruption that is not located in the object blocks themselves. Examples of this include:
Dictionary corruption from transportable tablespace operations: check dba_tablespaces to see if the tablespace has been plugged in (see the query after this list).
Lost writes in ASM diskgroup mirrors, most likely to be seen when there is heavy IO and disk resync activity. To check this, run dbms_diskgroup.checkfile to detect mirror discrepancies.
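A minimal sketch of the dba_tablespaces check mentioned in the first item above; the PLUGGED_IN column flags tablespaces that were transported into this database:
select tablespace_name, plugged_in from dba_tablespaces;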
If analyze reports no corruption, then check whether there are any chained rows on the table. If these exist, we may have an undetected corruption and the issue should reproduce whenever the SQL is run. Exporting the table will also detect this issue.
If analyze and exporting the table (in the presence of chained rows) both report no errors then this should be considered a consistent read issue.
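A minimal sketch of the chained-row check, assuming the affected table is scott.dept as in the earlier example; the CHAINED_ROWS table is created by the utlchain.sql script shipped under $ORACLE_HOME/rdbms/admin:
@?/rdbms/admin/utlchain.sql
analyze table scott.dept list chained rows into chained_rows;
select count(*) from chained_rows where table_name = 'DEPT';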
Once you understand the nature of the problem you can review the list of known bugs and determine which one matches your condition. If you cannot determine which issue is affecting you, open a service request with Oracle Support and upload the RDBMS and ASM (if applicable) instance alert logs for all nodes, any trace and incident files generated, and a full description of the nature of the problem.
In our case the database had no block corruption; flushing the buffer cache was enough to make the error stop.