impdp ORA-39002, ORA-39166, ORA-39164: the problem and the fix
While running performance tests with imp and impdp today, I found that loading is painfully slow when a table contains LOB columns, only about 1,000 rows per second. At that rate you basically cannot get anything done.
For example, with 50 million rows, 50000000/1000/60/60 = 13.89 hours, which is simply unacceptable.
So I tried impdp to see how much of a performance improvement it could bring.
The exported table holds about 90 million rows and is partitioned, with roughly 300 partitions. With a full-table export/import, an earlier test on 50 million rows took more than 3 hours, which is already quite long, and the time keeps growing as the data volume grows.
So I tried to approach the problem from the partition angle.
Export by partition, then import partition by partition.
The impdp command I used is shown below; remap_schema was already specified, but no matter what I tried, it kept throwing the following errors, even though the partition does in fact exist.
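A minimal sketch of the per-partition export that would produce such a dump, assuming the source schema is prdappo, the directory object memo_dir already exists, and the password is a placeholder:
expdp prdappo/passwd directory=memo_dir dumpfile=par1_mo1_memo.dmp logfile=par1_mo1_memo_exp.log tables=prdappo.mo1_memo:P9_A0_E5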
impdp mig_test/mig_test directory=memo_dir dumpfile=par1_mo1_memo.dmp logfile=par1_mo1_memo_imp.log tables=mig_test.mo1_memo:P9_A0_E5 TABLE_EXISTS_ACTION=append REMAP_SCHEMA=prdappo:MIG_TEST DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39166: Object MIG_TEST.MO1_MEMO was not found.
ORA-39164: Partition MIG_TEST.MO1_MEMO:P9_A0_E5 was not found.
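One way to confirm that the partition really is inside the dump file is to let impdp extract the DDL into a script without importing anything (a sketch using the SQLFILE parameter; the file name is arbitrary):
impdp mig_test/mig_test directory=memo_dir dumpfile=par1_mo1_memo.dmp sqlfile=par1_mo1_memo_ddl.sql
The generated script should show the table and its partitions created under the original owner PRDAPPO, which already hints at the cause documented below.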
I tried all kinds of approaches with no luck. In the end I found some clues on MetaLink (Doc ID 550200.1):
CAUSE
Unlike the fromuser/touser and tables functionality in traditional imp, Data Pump assumes that if the TABLES parameter does not include a schema name, the table is owned by the current user doing the import. It will not find the correct table to import unless the user doing the import is the same user that owns the tables in the export dump and has the IMP_FULL_DATABASE role, so that the user can import into other schemas.
SOLUTION
1. Either grant IMP_FULL_DATABASE to the user that owns the objects in the export dump, so that the user can import into the other schema referenced in REMAP_SCHEMA, and run the Data Pump import as that schema, i.e.
SQL> grant IMP_FULL_DATABASE to old_user;
impdp old_user/passwd TABLES=TABLEA:TABLEA_PARTITION1 \
REMAP_SCHEMA=old_user:new_user DUMPFILE=exp01.dmp,exp02.dmp,exp03.dmp \
DIRECTORY=data_pump_dir
Or:
2. Be sure to include the schema name in the TABLES parameter so the correct table can be found for the from-user/to-user referenced in REMAP_SCHEMA, i.e.
impdp system/passwd TABLES=old_user.TABLEA:TABLEA_PARTITION1 \
REMAP_SCHEMA=old_user:new_user DUMPFILE=exp01.dmp,exp02.dmp,exp03.dmp \
DIRECTORY=data_pump_dir
Finally I tried the following command and it actually worked, although it turned out the partition was empty. :)
impdp mig_test/mig_test directory=memo_dir dumpfile=par1_mo1_memo.dmp logfile=par1_mo1_memo_imp.log tables=prdappo.mo1_memo:P9_A0_E5 remap_schema=prdappo:mig_test TABLE_EXISTS_ACTION=append DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
Master table "MIG_TEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MIG_TEST"."SYS_IMPORT_TABLE_01": mig_test/******** directory=memo_dir dumpfile=par1_mo1_memo.dmp logfile=par1_mo1_memo_imp.log tables=prdappo.mo1_memo:P9_A0_E5 remap_schema=prdappo:mig_test TABLE_EXISTS_ACTION=append DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "MIG_TEST"."MO1_MEMO" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MIG_TEST"."MO1_MEMO":"P9_A0_E5" 0 KB 0 rows
Job "MIG_TEST"."SYS_IMPORT_TABLE_01" successfully completed at 17:23:04