Notes on a Hive table-creation error
A statement of the form create table table_name xxx as select xxx failed with the error below.
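For context, here is a minimal sketch of what such a statement looks like. The target table name is taken from the "Moving data to" line in the log below; the SELECT body and source table are hypothetical, since the original statement was elided:

-- Hypothetical reconstruction of the failing CTAS. In a CTAS, Hive runs the
-- query first and only registers the new table in the metastore afterwards,
-- which is why the MapReduce stages below succeed before the statement fails.
CREATE TABLE collection.kaohemingdan_yq11 AS
SELECT ...                  -- projection elided in the original post
FROM some_source_table;     -- hypothetical source

The full console output: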
Query ID = azkaban_20191122140707_d6f0dacf-4877-44ae-846d-46d0ed565c2f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1567579458012_599825, Tracking URL = http://hostname:8088/proxy/application_1567579458012_599825/
Kill Command = /opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/bin/hadoop job -kill job_1567579458012_599825
Hadoop job information for Stage-1: number of mappers: 24; number of reducers: 0
2019-11-22 14:07:50,833 Stage-1 map = 0%, reduce = 0%
2019-11-22 14:07:57,030 Stage-1 map = 8%, reduce = 0%, Cumulative CPU 2.34 sec
2019-11-22 14:07:58,060 Stage-1 map = 29%, reduce = 0%, Cumulative CPU 22.77 sec
2019-11-22 14:07:59,087 Stage-1 map = 46%, reduce = 0%, Cumulative CPU 43.83 sec
2019-11-22 14:08:00,120 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 70.92 sec
2019-11-22 14:08:01,161 Stage-1 map = 75%, reduce = 0%, Cumulative CPU 82.7 sec
2019-11-22 14:08:03,206 Stage-1 map = 77%, reduce = 0%, Cumulative CPU 92.6 sec
2019-11-22 14:08:04,230 Stage-1 map = 87%, reduce = 0%, Cumulative CPU 115.04 sec
2019-11-22 14:08:05,263 Stage-1 map = 90%, reduce = 0%, Cumulative CPU 115.92 sec
2019-11-22 14:08:06,300 Stage-1 map = 95%, reduce = 0%, Cumulative CPU 134.32 sec
2019-11-22 14:08:07,322 Stage-1 map = 98%, reduce = 0%, Cumulative CPU 135.86 sec
2019-11-22 14:08:09,392 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 137.59 sec
MapReduce Total cumulative CPU time: 2 minutes 17 seconds 590 msec
Ended Job = job_1567579458012_599825
Stage-4 is filtered out by condition resolver.
Stage-3 is selected by condition resolver.
Stage-5 is filtered out by condition resolver.
Starting Job = job_1567579458012_599827, Tracking URL = http://hostname:8088/proxy/application_1567579458012_599827/
Kill Command = /opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/bin/hadoop job -kill job_1567579458012_599827
Hadoop job information for Stage-3: number of mappers: 18; number of reducers: 0
2019-11-22 14:08:15,382 Stage-3 map = 0%, reduce = 0%
2019-11-22 14:08:20,536 Stage-3 map = 44%, reduce = 0%, Cumulative CPU 7.47 sec
2019-11-22 14:08:21,571 Stage-3 map = 78%, reduce = 0%, Cumulative CPU 14.2 sec
2019-11-22 14:08:22,606 Stage-3 map = 89%, reduce = 0%, Cumulative CPU 17.18 sec
2019-11-22 14:08:23,629 Stage-3 map = 94%, reduce = 0%, Cumulative CPU 18.08 sec
2019-11-22 14:08:24,661 Stage-3 map = 100%, reduce = 0%, Cumulative CPU 19.04 sec
MapReduce Total cumulative CPU time: 19 seconds 40 msec
Ended Job = job_1567579458012_599827
Moving data to: hdfs://nameservice1/user/hive/warehouse/collection.db/kaohemingdan_yq11
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: Add request failed : INSERT INTO `COLUMNS_V2` (`CD_ID`,`COMMENT`,`COLUMN_NAME`,`TYPE_NAME`,`INTEGER_IDX`) VALUES (?,?,?,?,?)
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:902)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy6.createTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1466)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1499)
at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy8.create_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9207)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9191)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
java.sql.BatchUpdateException: Duplicate entry '1155216-??????' for key 'PRIMARY'
at sun.reflect.GeneratedConstructorAccessor75.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.Util.getInstance(Util.java:387)
at com.mysql.jdbc.SQLError.createBatchUpdateException(SQLError.java:1161)
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1773)
at com.mysql.jdbc.PreparedStatement.executeBatchInternal(PreparedStatement.java:1257)
at com.mysql.jdbc.StatementImpl.executeBatch(StatementImpl.java:958)
at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:409)
at org.datanucleus.store.rdbms.scostore.JoinListStore.internalAdd(JoinListStore.java:304)
at org.datanucleus.store.rdbms.scostore.AbstractListStore.addAll(AbstractListStore.java:136)
at org.datanucleus.store.rdbms.mapping.java.CollectionMapping.postInsert(CollectionMapping.java:136)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:519)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2314)
at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObjectAsValue(PersistableMapping.java:567)
at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObject(PersistableMapping.java:326)
at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:193)
at org.datanucleus.state.JDOStateManager.providedObjectField(JDOStateManager.java:1269)
at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideField(MStorageDescriptor.java)
at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideFields(MStorageDescriptor.java)
at org.datanucleus.state.JDOStateManager.provideFields(JDOStateManager.java:1346)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:289)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2314)
at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObjectAsValue(PersistableMapping.java:567)
at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObject(PersistableMapping.java:326)
at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:193)
at org.datanucleus.state.JDOStateManager.providedObjectField(JDOStateManager.java:1269)
at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideField(MTable.java)
at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideFields(MTable.java)
at org.datanucleus.state.JDOStateManager.provideFields(JDOStateManager.java:1346)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:289)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)
at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:902)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy6.createTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1466)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1499)
at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy8.create_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9207)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9191)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1155216-??????' for key 'PRIMARY'
at sun.reflect.GeneratedConstructorAccessor74.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
at com.mysql.jdbc.Util.getInstance(Util.java:387)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:934)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3966)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3902)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2526)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2673)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2549)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861)
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2073)
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1751)
... 75 more
)
MapReduce Jobs Launched:
Stage-Stage-1: Map: 24 Cumulative CPU: 137.59 sec HDFS Read: 67114819 HDFS Write: 4234 SUCCESS
Stage-Stage-3: Map: 18 Cumulative CPU: 19.04 sec HDFS Read: 45708 HDFS Write: 1798 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 36 seconds 630 msec
The workaround was to split the statement into a separate CREATE TABLE and a separate INSERT. I'll track down the exact root cause later. Note, though, that both MapReduce stages report SUCCESS, so the failure comes afterwards, from the DDLTask that registers the new table in the metastore; the nested MySQL error (Duplicate entry '1155216-??????' on COLUMNS_V2, whose primary key is (CD_ID, COLUMN_NAME)) suggests multi-byte column names or comments being mangled to '?' by the metastore database's character set and then colliding, but I haven't confirmed that yet.
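A minimal sketch of the split version (the column list here is hypothetical; the point is only that the table is created explicitly first and populated afterwards):

-- Step 1: create the table explicitly (hypothetical column list)
CREATE TABLE collection.kaohemingdan_yq11 (
  id   STRING,
  name STRING
);

-- Step 2: populate it with the same SELECT the failing CTAS used
INSERT OVERWRITE TABLE collection.kaohemingdan_yq11
SELECT ...               -- same projection as before, elided here too
FROM some_source_table;  -- hypothetical source

If the mangled-name theory above is right, it should be visible in the metastore database itself. Something like the following, run against the metastore MySQL (assuming the default Hive metastore table names), would show whether COLUMNS_V2 uses a single-byte character set such as latin1, and what actually got stored for the CD_ID from the error message:

SHOW CREATE TABLE COLUMNS_V2;
SELECT CD_ID, COLUMN_NAME FROM COLUMNS_V2 WHERE CD_ID = 1155216;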