
Writing data throws java.lang.Exception: Batch 'xxxxxxxxxxx' failed with error 'InvalidBatch : Records not found' #71

Open
newforesee opened this issue Jun 23, 2021 · 0 comments

I use the following method to write data to Salesforce. Every time the write completes, an exception is thrown, but the data seems to be written anyway.

def save2Salesforce(df: DataFrame): Unit = {
    df.write
      .format("com.springml.spark.salesforce")
      .option("username", "myusername")
      .option("password", "mypassword")
      .option("login", "https://test.salesforce.com/")
      .option("upsert", true)
      .option("sfObject", "OrderTemp__c")
      .save()
  }
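
For the record, this is how I concluded that the data seems to land despite the exception: I read the object back through the same connector right after the failed write (a verification sketch; spark is my SparkSession and the SOQL query is just an example):

import org.apache.spark.sql.SparkSession

def verifyWrite(spark: SparkSession): Unit = {
    val written = spark.read
      .format("com.springml.spark.salesforce")
      .option("username", "myusername")
      .option("password", "mypassword")
      .option("login", "https://test.salesforce.com/")
      .option("soql", "SELECT Id FROM OrderTemp__c")
      .load()
    // If rows show up here immediately after the "failed" batch,
    // the write itself appears to have succeeded.
    println(s"OrderTemp__c rows visible: ${written.count()}")
  }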

The exception:

Exception in thread "main" java.lang.Exception: Batch '7512i000004KPxyAAG' failed with error 'InvalidBatch : Records not found'
	at com.springml.salesforce.wave.impl.BulkAPIImpl.isCompleted(BulkAPIImpl.java:93)
	at com.springml.spark.salesforce.SFObjectWriter.writeData(SFObjectWriter.scala:44)
	at com.springml.spark.salesforce.DefaultSource.updateSalesforceObject(DefaultSource.scala:154)
	at com.springml.spark.salesforce.DefaultSource.createRelation(DefaultSource.scala:130)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)

Will this exception affect my data? Can it safely be ignored, or is there any risk if I just ignore it?
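
If it does turn out to be safe to ignore, my plan would be to swallow only this specific failure and still fail hard on everything else (a workaround sketch, not a fix, reusing save2Salesforce from above):

import scala.util.{Failure, Success, Try}
import org.apache.spark.sql.DataFrame

def save2SalesforceTolerant(df: DataFrame): Unit =
    Try(save2Salesforce(df)) match {
      case Success(_) => ()
      case Failure(e) if Option(e.getMessage).exists(_.contains("InvalidBatch : Records not found")) =>
        // The batch status check failed, but the rows appear to land anyway;
        // log and continue only for this known case.
        println(s"Ignoring batch status error: ${e.getMessage}")
      case Failure(e) => throw e
    }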
