FE/BE RPC timeout
Posted by xcodeman in 2021-06 · 4,770 views · 7 replies

Batch-writing several hundred thousand rows via JDBC; after writing for a while, JDBC throws the error below.

Doris version: 0.14.11

	Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 12 in stage 2.0 failed 1 times, most recent failure: Lost task 12.0 in stage 2.0 (TID 14, localhost, executor driver): java.sql.BatchUpdateException: rpc failed, host: 172.16.66.121
		at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
		at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
		at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
		at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
		at com.mysql.cj.util.Util.handleNewInstance(Util.java:192)
		at com.mysql.cj.util.Util.getInstance(Util.java:167)
		at com.mysql.cj.util.Util.getInstance(Util.java:174)
		at com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:224)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:755)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:426)
		at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:796)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:671)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:838)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:838)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
		at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
		at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
		at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
		at org.apache.spark.scheduler.Task.run(Task.scala:123)
		at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
		at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
		at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
		at java.lang.Thread.run(Thread.java:748)
	Caused by: java.sql.SQLSyntaxErrorException: rpc failed, host: 172.16.66.121
		at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
		at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
		at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1040)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1347)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:746)
		... 17 more
	
	Driver stacktrace:
		at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
		at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
		at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
		at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
		at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
		at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
		at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
		at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
		at scala.Option.foreach(Option.scala:257)
		at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
		at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
		at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
		at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
		at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:980)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:978)
		at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
		at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
		at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
		at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:978)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:838)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:68)
		at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
		at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
		at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
		at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
		at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
		at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
		at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
		at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
		at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
		at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
		at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
		at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
		at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
		at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
		at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
		at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
		at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
		at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
		at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
		at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
		at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:515)
		at com.dandanvoice.cloud.pipeline.mysql.MySqlOutput.writeAppend(MySqlOutput.java:170)
		at com.dandanvoice.cloud.pipeline.mysql.MySqlOutput.applyBulkMutations(MySqlOutput.java:120)
		... 9 more
	Caused by: java.sql.BatchUpdateException: rpc failed, host: 172.16.66.121
		at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
		at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
		at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
		at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
		at com.mysql.cj.util.Util.handleNewInstance(Util.java:192)
		at com.mysql.cj.util.Util.getInstance(Util.java:167)
		at com.mysql.cj.util.Util.getInstance(Util.java:174)
		at com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:224)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:755)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:426)
		at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:796)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:671)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:838)
		at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:838)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
		at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:980)
		at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
		at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
		at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
		at org.apache.spark.scheduler.Task.run(Task.scala:123)
		at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
		at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
		at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
		... 3 more
	Caused by: java.sql.SQLSyntaxErrorException: rpc failed, host: 172.16.66.121
		at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
		at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
		at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1040)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1347)
		at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:746)
		... 17 more

The FE log is as follows:

2021-06-10 10:07:15,902 WARN (doris-mysql-nio-pool-5192|124547) [StmtExecutor.handleInsertStmt():1238] handle insert stmt fail: insert_87bc6071b2e34102-a9a4227fe521739d
org.apache.doris.common.UserException: errCode = 2, detailMessage = there is no scanNode Backend. [10002: in black list(Ocurrs time out with specfied time 5000 MILLISECONDS)]
	at org.apache.doris.qe.SimpleScheduler.getHost(SimpleScheduler.java:165) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.computeFragmentHosts(Coordinator.java:965) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.computeFragmentExecParams(Coordinator.java:772) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.exec(Coordinator.java:399) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.handleInsertStmt(StmtExecutor.java:1180) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:367) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:284) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:206) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:344) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:545) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:3.4.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
2021-06-10 10:07:15,902 WARN (doris-mysql-nio-pool-5188|124542) [StmtExecutor.handleInsertStmt():1238] handle insert stmt fail: insert_96dbca49ff2a416b-bddf570598ff7c5b
org.apache.doris.common.UserException: errCode = 2, detailMessage = there is no scanNode Backend. [10002: in black list(Ocurrs time out with specfied time 5000 MILLISECONDS)]
	at org.apache.doris.qe.SimpleScheduler.getHost(SimpleScheduler.java:165) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.computeFragmentHosts(Coordinator.java:965) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.computeFragmentExecParams(Coordinator.java:772) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.Coordinator.exec(Coordinator.java:399) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.handleInsertStmt(StmtExecutor.java:1180) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:367) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:284) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:206) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:344) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:545) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:3.4.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
2021-06-10 10:07:15,904 WARN (doris-mysql-nio-pool-5202|124583) [StmtExecutor.execute():421] errors when abort txn
org.apache.doris.transaction.TransactionNotFoundException: errCode = 2, detailMessage = transaction not found
	at org.apache.doris.transaction.DatabaseTransactionMgr.abortTransaction(DatabaseTransactionMgr.java:963) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:221) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:216) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:417) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:284) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:206) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:344) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:545) ~[palo-fe.jar:3.4.0]
	at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:3.4.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]

The BE log:

W0610 10:07:13.889586 13865 utils.cpp:119] fail to report to master: THRIFT_EAGAIN (timed out)
W0610 10:07:13.890522 13865 task_worker_pool.cpp:1534] report TASK failed. status: -1, master host: 172.16.66.121, port:9020
W0610 10:07:15.449587 13867 utils.cpp:119] fail to report to master: THRIFT_EAGAIN (timed out)
W0610 10:07:15.449663 13867 task_worker_pool.cpp:1534] report TABLET failed. status: -1, master host: 172.16.66.121, port:9020
W0610 10:07:15.858852 13853 utils.cpp:75] fail to finish_task. host=172.16.66.121, port=9020, error=finishTask failed: unknown result
W0610 10:07:15.864553 13853 task_worker_pool.cpp:279] finish task failed. status_code=0
W0610 10:07:15.918713 13846 utils.cpp:75] fail to finish_task. host=172.16.66.121, port=9020, error=finishTask failed: unknown result
W0610 10:07:15.925714 13846 task_worker_pool.cpp:279] finish task failed. status_code=0
7 replies in total; the last reply was by 中间开花 in 2021-08.
#8 中间开花 replied in 2021-08

2021-08-13 18:20:32,575 INFO (doris-mysql-nio-pool-506|1208) [DatabaseTransactionMgr.abortTransaction():1029] abort transaction: TransactionState. transaction id: 86431, label: insert_7800962c623d46c4-afb0a609e690088c, db id: 10124, table id list: 12327, callback id: -1, coordinator: FE: 172.16.255.146, transaction status: ABORTED, error replicas num: 0, replica ids: , prepare time: 1628850032554, commit time: -1, finish time: 1628850032575, reason: errCode = 2, detailMessage = wait close failed. NodeChannel[17084-10003] add batch req success but status isn't ok, load_id=7800962c623d46c4-afb0a609e690088c, txn_id=86431, backend id=10003:8060, errmsg=tablet writer write failed, tablet_id=26044, txn_id=86431, err=-235 successfully
2021-08-13 18:20:32,575 INFO (doris-mysql-nio-pool-506|1208) [QeProcessorImpl.unregisterQuery():124] deregister query id 7800962c623d46c4-afb0a609e690088c
2021-08-13 18:20:32,575 WARN (doris-mysql-nio-pool-506|1208) [StmtExecutor.execute():426] errors when abort txn
org.apache.doris.transaction.TransactionNotFoundException: errCode = 2, detailMessage = transaction not found
at org.apache.doris.transaction.DatabaseTransactionMgr.abortTransaction(DatabaseTransactionMgr.java:1001) ~[palo-fe.jar:3.4.0]
at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:239) ~[palo-fe.jar:3.4.0]
at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:234) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:422) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:275) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:206) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:344) ~[palo-fe.jar:3.4.0]
at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:545) ~[palo-fe.jar:3.4.0]
at org.apache.doris.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:50) ~[palo-fe.jar:3.4.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]

#7 所以一直搁浅 replied in 2021-07

I ran into the same problem: Flink JDBC writes to Doris start failing once the data volume gets large, just like the OP.

#6 Ling缪 replied in 2021-06
Re #5 willwang704: "This is my FE error log [code]"

Yours does not look like the same problem as his. Log in to that BE (node=10.25.16.123:8060), find log lines like errmsg=tablet writer write failed, and then check the surrounding context.
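The triage step suggested here, scanning the BE log for errmsg=tablet writer write failed and reading the lines around each hit, can be scripted. Below is a minimal sketch in Java; the log file path in the comment is an assumption, so substitute your BE's actual log location:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: collect matching log lines plus a few lines of surrounding context.
// In practice, feed it the BE's warning log, e.g.
//   Files.readAllLines(Paths.get("/path/to/be/log/be.WARNING"))
// (that path is hypothetical; check your BE's actual log directory).
public class LogContext {
    public static List<String> withContext(List<String> lines, String needle, int ctx) {
        List<String> out = new ArrayList<>();
        int lastEmitted = -1; // avoids re-emitting overlapping context windows
        for (int i = 0; i < lines.size(); i++) {
            if (!lines.get(i).contains(needle)) continue;
            int from = Math.max(Math.max(0, i - ctx), lastEmitted + 1);
            int to = Math.min(lines.size(), i + ctx + 1);
            for (int j = from; j < to; j++) {
                out.add(lines.get(j));
                lastEmitted = j;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "I0616 14:07:19 writer opened",
                "W0616 14:07:20 tablet writer write failed, tablet_id=10842",
                "I0616 14:07:20 channel closed");
        // Prints the matching line with one line of context on each side.
        for (String line : withContext(sample, "tablet writer write failed", 1)) {
            System.out.println(line);
        }
    }
}
```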

#5 willwang704 replied in 2021-06
Re #2 Ling缪: "What load method are you using?"

This is my FE error log:

2021-06-16 14:07:20,264 WARN (thrift-server-pool-113|838) [StmtExecutor.handleInsertStmt():846] insert failed: close wait failed coz rpc error. node=10.25.16.123:8060, errmsg=tablet writer write failed, tablet_id=10842, txn_id=397907, err=-215
2021-06-16 14:07:20,264 WARN (thrift-server-pool-113|838) [StmtExecutor.handleInsertStmt():894] handle insert stmt fail: insert_2f0d086a6ebe4dca-a6d4bd5872cbc08f
org.apache.doris.common.DdlException: errCode = 2, detailMessage = close wait failed coz rpc error. node=10.25.16.123:8060, errmsg=tablet writer write failed, tablet_id=10842, txn_id=397907, err=-215
        at org.apache.doris.common.ErrorReport.reportDdlException(ErrorReport.java:67) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.handleInsertStmt(StmtExecutor.java:847) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:326) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.ConnectProcessor.proxyExecute(ConnectProcessor.java:483) [palo-fe.jar:3.4.0]
        at org.apache.doris.service.FrontendServiceImpl.forward(FrontendServiceImpl.java:655) [palo-fe.jar:3.4.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1873) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1858) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [spark-dpp-1.0.0.jar:1.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
2021-06-16 14:07:20,265 WARN (thrift-server-pool-37|748) [StmtExecutor.execute():380] errors when abort txn
org.apache.doris.transaction.TransactionNotFoundException: errCode = 2, detailMessage = transaction not found
        at org.apache.doris.transaction.DatabaseTransactionMgr.abortTransaction(DatabaseTransactionMgr.java:949) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:210) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:205) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:376) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.ConnectProcessor.proxyExecute(ConnectProcessor.java:483) [palo-fe.jar:3.4.0]
        at org.apache.doris.service.FrontendServiceImpl.forward(FrontendServiceImpl.java:655) [palo-fe.jar:3.4.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1873) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1858) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [spark-dpp-1.0.0.jar:1.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
2021-06-16 14:07:20,265 WARN (thrift-server-pool-113|838) [StmtExecutor.execute():380] errors when abort txn
org.apache.doris.transaction.TransactionNotFoundException: errCode = 2, detailMessage = transaction not found
        at org.apache.doris.transaction.DatabaseTransactionMgr.abortTransaction(DatabaseTransactionMgr.java:949) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:210) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:205) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:376) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.ConnectProcessor.proxyExecute(ConnectProcessor.java:483) [palo-fe.jar:3.4.0]
        at org.apache.doris.service.FrontendServiceImpl.forward(FrontendServiceImpl.java:655) [palo-fe.jar:3.4.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1873) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1858) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [spark-dpp-1.0.0.jar:1.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
2021-06-16 14:07:20,267 WARN (thrift-server-pool-168|927) [Coordinator.updateFragmentExecStatus():1369] one instance report fail, query_id=379c18a376da4677-a1cb9d0e0fdb5ece instance_id=379c18a376da4677-a1cb9d0e0fdb5ecf
2021-06-16 14:07:20,267 WARN (thrift-server-pool-168|927) [Coordinator.updateStatus():670] one instance report fail throw updateStatus(), need cancel. job id: -1, query id: 379c18a376da4677-a1cb9d0e0fdb5ece, instance id: 379c18a376da4677-a1cb9d0e0fdb5ecf
2021-06-16 14:07:20,267 WARN (thrift-server-pool-150|875) [StmtExecutor.handleInsertStmt():846] insert failed: close wait failed coz rpc error. node=10.25.16.123:8060, errmsg=tablet writer write failed, tablet_id=10826, txn_id=397881, err=-215
2021-06-16 14:07:20,267 WARN (thrift-server-pool-150|875) [StmtExecutor.handleInsertStmt():894] handle insert stmt fail: insert_379c18a376da4677-a1cb9d0e0fdb5ece
org.apache.doris.common.DdlException: errCode = 2, detailMessage = close wait failed coz rpc error. node=10.25.16.123:8060, errmsg=tablet writer write failed, tablet_id=10826, txn_id=397881, err=-215
        at org.apache.doris.common.ErrorReport.reportDdlException(ErrorReport.java:67) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.handleInsertStmt(StmtExecutor.java:847) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:326) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.ConnectProcessor.proxyExecute(ConnectProcessor.java:483) [palo-fe.jar:3.4.0]
        at org.apache.doris.service.FrontendServiceImpl.forward(FrontendServiceImpl.java:655) [palo-fe.jar:3.4.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1873) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1858) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [spark-dpp-1.0.0.jar:1.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
2021-06-16 14:07:20,267 WARN (thrift-server-pool-150|875) [StmtExecutor.execute():380] errors when abort txn
org.apache.doris.transaction.TransactionNotFoundException: errCode = 2, detailMessage = transaction not found
        at org.apache.doris.transaction.DatabaseTransactionMgr.abortTransaction(DatabaseTransactionMgr.java:949) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:210) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.transaction.GlobalTransactionMgr.abortTransaction(GlobalTransactionMgr.java:205) ~[palo-fe.jar:3.4.0]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:376) [palo-fe.jar:3.4.0]
        at org.apache.doris.qe.ConnectProcessor.proxyExecute(ConnectProcessor.java:483) [palo-fe.jar:3.4.0]
        at org.apache.doris.service.FrontendServiceImpl.forward(FrontendServiceImpl.java:655) [palo-fe.jar:3.4.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1873) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.doris.thrift.FrontendService$Processor$forward.getResult(FrontendService.java:1858) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [spark-dpp-1.0.0.jar:1.0.0]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [spark-dpp-1.0.0.jar:1.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_281]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_281]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]

#4 willwang704 replied in 2021-06

I hit this problem too, writing with INSERT INTO: 300+ columns and 8,000+ rows, and the error is exactly the same. No idea what causes it.

#3 xcodeman replied in 2021-06
Re #2 Ling缪: "What load method are you using?"

JDBC, written in batch mode.
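For reference, this write path is the standard JDBC pattern: accumulate rows on a PreparedStatement and flush with executeBatch(). One commonly tried mitigation, an assumption here rather than a verified fix, is flushing in smaller chunks so each INSERT the FE has to coordinate is smaller. A minimal sketch (table and column names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of chunked JDBC batch writes. With a real connection this wraps
// a java.sql.PreparedStatement:
//
//   PreparedStatement ps = conn.prepareStatement(
//       "INSERT INTO demo_table (id, val) VALUES (?, ?)"); // hypothetical table
//   // per row: ps.setLong(1, id); ps.setString(2, val); ps.addBatch();
//   // every `batchSize` rows: ps.executeBatch();  // one flush per chunk
//
// The chunking itself is plain arithmetic and is shown runnable below.
public class BatchChunker {
    /** Sizes of the batches produced when flushing every batchSize rows. */
    public static List<Integer> chunkSizes(int totalRows, int batchSize) {
        List<Integer> chunks = new ArrayList<>();
        for (int remaining = totalRows; remaining > 0; remaining -= batchSize) {
            chunks.add(Math.min(batchSize, remaining)); // one executeBatch() call
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 250,000 rows flushed every 10,000 rows -> 25 executeBatch() calls
        System.out.println(chunkSizes(250_000, 10_000).size()); // 25
        System.out.println(chunkSizes(25, 10));                 // [10, 10, 5]
    }
}
```

With MySQL Connector/J (the driver these stack traces show), setting rewriteBatchedStatements=true in the JDBC URL coalesces each batch into a multi-row INSERT; whether that helps or hurts here depends on how large each resulting statement becomes.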

#2 Ling缪 replied in 2021-06

What load method are you using?
