Description:
When I use the SCDF API's jobs/instances endpoint to check the status of a jobInstance, it returns empty values and misleading data about its jobExecutions: for example, a status stuck at STARTING, and null or empty timestamps.
The strange thing is that when I use the jobs/executions endpoint to check the same jobExecution, all the data comes back correctly, and when I manually run against the database the exact query SCDF uses to back the jobs/instances endpoint, it also returns the data correctly.
So I suspect the problem is in how the query results are mapped into the response, not in the stored data itself.
Release versions:
Steps to reproduce:
Run an SCDF Server with a PostgreSQL database and launch a task that contains a job. Then query the job instance data through the SCDF API and inspect what is returned.
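To make the check in the repro steps concrete, here is a minimal sketch of how the broken values can be pulled out of the /jobs/instances response. The server address, the job name, and the JSON field names (`_embedded`, `jobInstanceResourceList`, `jobExecutions`, `status`, `startTime`) are assumptions for illustration, not confirmed SCDF response structure:

```python
import json
import urllib.request

SCDF_URL = "http://localhost:9393"  # assumed local SCDF server address

def get_json(path):
    """GET a path on the SCDF server and parse the JSON body."""
    with urllib.request.urlopen(SCDF_URL + path) as resp:
        return json.load(resp)

def execution_summaries(instances_payload):
    """Flatten the jobExecutions embedded in a /jobs/instances response
    into (status, startTime) pairs so the bad values stand out.
    Field names here are assumptions about the response shape."""
    instances = instances_payload.get("_embedded", {}).get(
        "jobInstanceResourceList", [])
    return [
        (ex.get("status"), ex.get("startTime"))
        for inst in instances
        for ex in inst.get("jobExecutions", [])
    ]

# Usage (requires a running server; 'myjob' is a placeholder name):
#   payload = get_json("/jobs/instances?name=myjob")
#   print(execution_summaries(payload))
#   On the affected version this shows pairs like ('STARTING', None).
```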
Additional context:
Additional info:
The data queried from the database, exactly as SCDF queries it (I copied the query verbatim and ran it in DBeaver):
SELECT I.JOB_INSTANCE_ID AS JOB_INSTANCE_ID,
       I.JOB_NAME AS JOB_NAME,
       I.SCHEMA_TARGET AS SCHEMA_TARGET,
       E.JOB_EXECUTION_ID AS JOB_EXECUTION_ID,
       E.START_TIME AS START_TIME,
       E.END_TIME AS END_TIME,
       E.STATUS AS STATUS,
       E.EXIT_CODE AS EXIT_CODE,
       E.EXIT_MESSAGE AS EXIT_MESSAGE,
       E.CREATE_TIME AS CREATE_TIME,
       E.LAST_UPDATED AS LAST_UPDATED,
       E.VERSION AS VERSION,
       T.TASK_EXECUTION_ID AS TASK_EXECUTION_ID,
       (SELECT COUNT(*)
          FROM AGGREGATE_STEP_EXECUTION S
         WHERE S.JOB_EXECUTION_ID = E.JOB_EXECUTION_ID
           AND S.SCHEMA_TARGET = E.SCHEMA_TARGET) AS STEP_COUNT
  FROM AGGREGATE_JOB_INSTANCE I
  JOIN AGGREGATE_JOB_EXECUTION E
    ON I.JOB_INSTANCE_ID = E.JOB_INSTANCE_ID
   AND I.SCHEMA_TARGET = E.SCHEMA_TARGET
  LEFT OUTER JOIN AGGREGATE_TASK_BATCH TT
    ON E.JOB_EXECUTION_ID = TT.JOB_EXECUTION_ID
   AND E.SCHEMA_TARGET = TT.SCHEMA_TARGET
  LEFT OUTER JOIN AGGREGATE_TASK_EXECUTION T
    ON TT.TASK_EXECUTION_ID = T.TASK_EXECUTION_ID
   AND TT.SCHEMA_TARGET = T.SCHEMA_TARGET
 WHERE I.JOB_INSTANCE_ID = 2
   AND I.SCHEMA_TARGET = 'boot3';
The data returned by the database (the JobExecutionId is 2):
The JSON returned by a cURL call to the SCDF Server, with the incorrect data:
Here in the JSON you can see the data it returns; querying the same jobExecution through the /jobs/executions endpoint returns the correct data:
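A quick way to surface the mismatch is to diff the fields that the two endpoints report for the same jobExecution. This is an illustrative sketch, not SCDF code; the field names and the sample values below are hypothetical, shaped like the symptoms described in this report (STARTING status and null startTime from /jobs/instances, correct values from /jobs/executions):

```python
def diff_execution_views(from_instances, from_executions,
                         fields=("status", "startTime", "endTime", "exitCode")):
    """Compare the jobExecution fields reported by /jobs/instances against
    those reported by /jobs/executions; return only the fields that disagree,
    mapped to the (instances_value, executions_value) pair."""
    return {
        f: (from_instances.get(f), from_executions.get(f))
        for f in fields
        if from_instances.get(f) != from_executions.get(f)
    }

# Hypothetical values shaped like the ones observed in this report:
from_instances = {"status": "STARTING", "startTime": None}
from_executions = {"status": "COMPLETED", "startTime": "2023-10-06T10:15:30"}
print(diff_execution_views(from_instances, from_executions))
# {'status': ('STARTING', 'COMPLETED'), 'startTime': (None, '2023-10-06T10:15:30')}
```

On the affected version the diff is non-empty for the same jobExecutionId, which is what points the blame at the /jobs/instances response mapping rather than at the database.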
* Ensure sub-query use for Job instances.
Fix find by name query ordering.
Fix find by name not found query.
Improve test case readability.
* Fix imports and spacing.
Removed .andDo(print())
* Fixed wildcard imports.
#5484