Setting up a Job vs. Creating a Job #46

Open
ysavourel opened this issue Aug 28, 2018 · 17 comments

@ysavourel

I think we may still need that /jobs/{jobId}/submit command we had a while back.

Creating a job is one POST call, but then we need to upload all the assets for that job and then create all the tasks for each asset. We will likely need a way to indicate to the TMS side that we are done adding assets and tasks for a given job. A bit like staging the job and then committing it.

Otherwise the TMS side may have a hard time organizing its own structure. For example, the TMS may need to create one separate project for each set of files with the same source language. Another case may be if the TMS needs to group files per language pair. Etc.

In other words, the TMS may need to know about the whole job before it can actually start creating its internal structures.

The steps of setting up a job would be something like this:

POST /jobs (get back the job ID)
for each asset in the job:
    POST /jobs/{jobId}/assets/uploadFile (get back the asset ID)
    for each target language for that asset:
        POST /assets/{assetId}/tasks (get back the task ID)
POST /jobs/{jobId}/submit (to commit the job)

This also means that we may need some value indicating that the progress of the tasks has not even started yet (while the job setup is being done). Something maybe like new or not-submitted-yet.
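
To make the staging-then-committing idea concrete, here is a minimal client-side sketch of that sequence. Only the endpoint paths come from the steps above; the base URL, payload fields and response shapes are assumptions made for illustration:

# Hypothetical client-side walkthrough of the proposed flow. The base URL,
# JSON fields and response shapes are illustrative assumptions.
import requests

BASE = "https://tms.example.com/tapicc/v1"        # assumed base URL
job_files = {"manual.docx": ["fr", "de", "pl"]}   # asset -> target languages

# 1. Create the job; the response carries the new job's ID.
job_id = requests.post(f"{BASE}/jobs", json={"name": "Example job"}).json()["id"]

# 2. Upload each asset, then create one task per target language.
for path, targets in job_files.items():
    with open(path, "rb") as fh:
        asset_id = requests.post(
            f"{BASE}/jobs/{job_id}/assets/uploadFile", files={"file": fh}
        ).json()["id"]
    for target in targets:
        requests.post(
            f"{BASE}/assets/{asset_id}/tasks",
            json={"type": "translation", "targetLanguage": target},
        )

# 3. Commit the staged job: no more assets or tasks will follow.
requests.post(f"{BASE}/jobs/{job_id}/submit")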

@Alino
Member

Alino commented Aug 28, 2018

so visiting GET /jobs/{jobId}/submit would change some property of a Job to mark it as submitted?
Would that be locking the Job, so it would not be possible to modify it anymore and not possible to create/update Assets in it?
Or can the Job be resubmitted with the same route?

Is the TMS supposed to recreate the structure of a TAPICC Job, after visiting this route?
Do you have an idea how that would work? Via a webhook?

@ysavourel
Author

so visiting GET /jobs/{jobId}/submit would change some property of a Job to mark it as submitted?
Would that be locking the Job, so it would not be possible to modify it anymore and not possible to create/update Assets in it?
Or can the Job be resubmitted with the same route?

I’m not sure if we would need a job-level flag. Having the status of each task set to a value indicating the job is not submitted yet might be enough.
As for updates, I guess that goes back to issue #41. I don’t think the way we create the job itself affects how you can update it. But it’s a good point: we have not really discussed how updates (or lack thereof) would be done.

Is the TMS supposed to recreate the structure of a TAPICC Job, after visiting this route?
Do you have an idea how that would work? Via a webhook?

We do have two structures: the one of TAPICC and the one of the TMS. There is very little chance that they match completely.
So most likely we have to keep track of the TAPICC data (at least some of it) separately from the TMS structure. But from the viewpoint of the TAPICC client that should be transparent: for example, the call to get the status of a translation task for a given file for a given target is simply routed to whatever structure the TMS uses. I don’t think you need to re-convert to TAPICC, or use webhooks, for all that.

This is where we really need the input of developers who would implement TAPICC with their systems: we can’t imagine all the ways things could be done. At least I can’t: my experience of connecting CMS and TMS is limited to a few systems on each side, and is likely rather small compared to all the systems out there.

Maybe an example would help here:

At Argos, one of the TMSes we use works the following way: the top unit is a “project”. A given project can have one or more “batches”. A batch is a set of identical source files with a single source language and one or more targets, going through one given workflow. We can construct a batch in several steps (file by file if needed, add targets, etc.), but after that all processes are usually done for all files at once, or at least by language pair.

So, let’s say we get a TAPICC job like this:

  • Asset-11 (Source=EN)
    • Task-111 (Type=translation, Target=FR)
    • Task-112 (Type=translation, Target=DE)
    • Task-113 (Type=translation, Target=PL)
  • Asset-12 (Source=EN)
    • Task-121 (Type=translation, Target=FR)
    • Task-122 (Type=translation, Target=DE)
    • Task-123 (Type=translation, Target=JA)
  • Asset-13 (Source=DE)
    • Task-131 (Type=translation, Target=JA)

We have to re-structure the 7 tasks of the TAPICC data into something like this:

  • Batch-1A (Source=EN)
    • Target=FR
      • Asset-11 (Task-111)
      • Asset-12 (Task-121)
    • Target=DE
      • Asset-11 (Task-112)
      • Asset-12 (Task-122)
  • Batch-1B (Source=EN)
    • Target=PL
      • Asset-11 (Task-113)
  • Batch-1C (Source=EN)
    • Target=JA
      • Asset-12 (Task-123)
  • Batch-1D (Source=DE)
    • Target=JA
      • Asset-13 (Task-131)

So we would associate each of the items in the batches with a task, and just work through that link as needed.
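
A rough sketch of that regrouping step, assuming each asset carries its source language and each of its tasks carries a target language (the data shapes and the function below are illustrative, not part of the TAPICC model):

# Rebuild TMS-style "batches" from TAPICC assets/tasks: first bucket tasks
# by (source, target) language pair, then merge pairs covering exactly the
# same set of assets into a single batch, as in the example above.
from collections import defaultdict

def group_into_batches(assets):
    # assets: [{"id": ..., "source": ..., "tasks": [{"id": ..., "target": ...}]}]
    pairs = defaultdict(list)    # (source, target) -> [(asset id, task id), ...]
    for asset in assets:
        for task in asset["tasks"]:
            pairs[(asset["source"], task["target"])].append((asset["id"], task["id"]))

    batches = defaultdict(dict)  # (source, frozenset of asset ids) -> {target: items}
    for (source, target), items in pairs.items():
        covered = frozenset(asset_id for asset_id, _ in items)
        batches[(source, covered)][target] = items
    return batches

# With the job above this yields four batches: (EN, {Asset-11, Asset-12}) with FR
# and DE, (EN, {Asset-11}) with PL, (EN, {Asset-12}) with JA, and (DE, {Asset-13}) with JA.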

@Alino
Member

Alino commented Sep 6, 2018

Sorry, I feel lost here, can you please answer my questions so I can see it more clearly?

  1. What exactly should be the function of this endpoint? /jobs/{jobId}/submit
  2. What should trigger the migration of Job data from TAPICC to the TMS?

I’m not sure if we would need a job-level flag. Having the status of each task set to a value indicating the job is not submitted yet might be enough.

  • Are you talking about an already existing Task.progress attribute, or is it some new Task.status attribute which should be created?
  • What is the benefit of having this value on Task level, instead of Job level?

@ysavourel
Author

What exactly should be the function of this endpoint? /jobs/{jobId}/submit

It is sent by the creator of the job, when all the job’s components (assets and tasks) have been uploaded and created. POST /jobs starts the process of creating the job, and POST /jobs/{jobId}/submit concludes it.

It would work roughly this way:

  • The TAPICC client sends a POST /jobs – that creates a new job (but it has no assets and no tasks yet), and the new job’s ID is sent back in the response.
  • Then the client uploads assets and creates tasks as needed
    (e.g. 10 files to translate from EN to PL, DE and RU, that’s 10 uploads and 30 tasks to create: 40 calls)
  • Then the client concludes with a POST /jobs/{jobId}/submit, so the server now knows there is no more data coming for that job and it can move on to create the corresponding structures/whatever for that new project in the backend.

What should trigger the migration of Job data from TAPICC to the TMS?

If by migration you mean whatever process needs to happen to make the TAPICC job’s data a “real” job for the TMS, the answer is: the submit call.
If some systems do not need such a trigger and can handle the bits and pieces of the new job as they arrive, good for them. They can just ignore the submit call then.

I’m not sure if we would need a job-level flag. Having the status of each task set to a value indicating the job is not submitted yet might be enough.

Are you talking about an already existing Task.progress attribute, or is it some new Task.status attribute which should be created?
What is the benefit of having this value on Task level, instead of Job level?

I’m talking about Task.progress: it would probably need a value to indicate that a task is not yet “ready” to be processed (maybe new), and then that progress field would be set to pending when the server receives the submit call.

I only mentioned a possible status at the job level because we currently don’t have a direct way to know whether a job has been submitted or not (the only way would be to look at the task.progress values, and/or whether it has any assets and tasks at all).
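
As a very small sketch of what the server could do when it receives the submit call (the new and pending values come from the paragraph above; the in-memory job/task shapes are assumptions):

# Tasks are created with progress="new" while the job is being set up;
# the submit call flips them to "pending" so the TMS knows they are ready.
def submit_job(job, tasks):
    for task in tasks:
        if task["progress"] == "new":      # created during setup, not ready yet
            task["progress"] = "pending"   # now ready to be processed
    job["submitted"] = True                # optional job-level marker
    return job, tasks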

I hope this helps.

@Alino
Member

Alino commented Sep 6, 2018

Thanks, I believe it helped me to understand better.

If by migration you mean whatever process needs to happen to make the TAPICC job’s data a “real” job for the TMS, the answer is: the submit call.
If some systems do not need such a trigger and can handle the bits and pieces of the new job as they arrive, good for them. They can just ignore the submit call then.

  • So, we will probably need to have a webhook event associated with job submit; let's call it a "JobSubmitted" event.
    When the job submit endpoint is opened, it should trigger this webhook so that the TMS can obtain all Job data, including Assets and Tasks, and recreate that structure in itself as it needs.
    (The TMS would have to be compatible with TAPICC, in such a way that it would be able to create the structure from the webhook.)
  • Or alternatively, some TAPICC implementation could do this after the job is submitted, by using the API of the TMS to recreate the structure there as it should be.

I’m talking about Task.progress: it would probably need a value to indicate that a task is not yet “ready” to be processed (maybe new), and then that progress field would be set to pending when the server receives the submit call.
I only mentioned a possible status at the job level because we currently don’t have a direct way to know whether a job has been submitted or not (the only way would be to look at the task.progress values, and/or whether it has any assets and tasks at all).

What if we ignore Task.progress in this matter, and instead create a new attribute Job.submittedAt, which would be a date-time? (We had this property, but I deleted it because I thought it was the same thing as createdAt.) From now on, this property would be filled with a date-time after the job is submitted via the API endpoint.

Then we can create another boolean attribute called Job.changedSinceLastSubmit.
This would be set to false by default, but as soon as there is a modification to the Job object, or to any associated object such as an Asset or Task (creation, deletion, modification), it would be set to true.
This way we would know whether something has changed in the Job data since it was last submitted.
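
A minimal sketch of how those two attributes could behave; only the attribute names come from the proposal above, the rest is illustrative:

# Sketch of the proposed Job.submittedAt / Job.changedSinceLastSubmit behaviour.
from datetime import datetime, timezone

class Job:
    def __init__(self):
        self.submittedAt = None                # date-time of the last submit, if any
        self.changedSinceLastSubmit = False

    def record_change(self):
        # Call on any creation/deletion/modification of the Job or its Assets/Tasks.
        self.changedSinceLastSubmit = True

    def submit(self):
        self.submittedAt = datetime.now(timezone.utc)
        self.changedSinceLastSubmit = False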

@ysavourel
Author

So, we will probably need to have a webhook event associated with job submit; let's call it a "JobSubmitted" event.
When the job submit endpoint is opened, it should trigger this webhook so that the TMS can obtain all Job data, including Assets and Tasks, and recreate that structure in itself as it needs.
(The TMS would have to be compatible with TAPICC, in such a way that it would be able to create the structure from the webhook.)

I'm afraid I'm not sure I understand the webhook's purpose. To me a webhook is a callback URL that a client of a TAPICC server sets in the TAPICC server, so that when an event occurs, that client is notified.
So, in the scenario of a CMS (the client) creating a job in the TMS (the TAPICC server), I don't understand why the submit call would trigger a webhook for the TMS. The TAPICC server doesn't need to have a webhook for itself. It can just act when it receives the submit... Or I'm missing something...

What if we ignore Task.progress in this matter, and instead create a new attribute Job.submittedAt, which would be a date-time? (We had this property, but I deleted it because I thought it was the same thing as createdAt.) From now on, this property would be filled with a date-time after the job is submitted via the API endpoint.

A Job.submittedAt would be fine. But I'm not sure this can replace a "new"-like value for Task.progress. Not having a distinct indicator at the task level for tasks not ready yet would make detecting the "status" of the task complicated (one would need to also access the job).

Then we can create another boolean attribute called Job.changedSinceLastSubmit

I guess that goes back to the discussion about how to do updates.

@Alino
Member

Alino commented Sep 6, 2018

I'm afraid I'm not sure I understand the webhook's purpose. To me a webhook is a callback URL that a client of a TAPICC server sets in the TAPICC server, so that when an event occurs, that client is notified.
So, in the scenario of a CMS (the client) creating a job in the TMS (the TAPICC server), I don't understand why the submit call would trigger a webhook for the TMS. The TAPICC server doesn't need to have a webhook for itself. It can just act when it receives the submit... Or I'm missing something...

The idea was that TAPICC would send a webhook to the TMS (or to some middleware between TAPICC and the TMS, for example zapier.com) so that the TMS can recreate the structure in its database or do whatever it wants, by acting on the webhook (the TAPICC webhook would send all required data to the TMS). But maybe it's a bad idea and we shouldn't expect TMS systems to make the extra effort of supporting this TAPICC webhook; rather, a specific TAPICC implementation should adapt to the API of the TMS to recreate the structure in the TMS.

In other words, I think there are 3 options for how the TMS gets the structure from TAPICC created:

  1. TAPICC webhook sent to a middleware, something like zapier.com
  2. TAPICC webhook sent directly to the TMS
  3. TAPICC implementation uses the API of the TMS

A Job.submittedAt would be fine. But I'm not sure this can replace a "new"-like value for Task.progress. Not having a distinct indicator at the task level for tasks not ready yet would make detecting the "status" of the task complicated (one would need to also access the job).

Yes, it's true that one would need to also access the Job. But what about Assets? Do we need this information also on Assets or only on Tasks?
For me it seems easier to reason about whether all of these objects have been submitted if their parent object (the Job) carries the indicator of whether it was submitted.
In what scenario does someone need to know if the Job of a Task has been submitted?

@Alino
Member

Alino commented Sep 6, 2018

Sorry, I am tired; I think I missed the part where you say TAPICC is the TMS. I thought we needed to send job data from the TAPICC server to some other server (the TMS) which might have a different data structure (the batches example you used before).

@ysavourel
Author

I see. Then yes, what you said would have made sense.
I guess we are back on the same page then.

@Alino
Member

Alino commented Sep 6, 2018

In your very first post, does TMS mean TAPICC or something else?
Probably I am also confused by our terminology.
I thought TAPICC is not a TMS, if I remember correctly from our last group call, but a bridge between TMS and/or CMS?

@ysavourel
Author

TMS means a normal TMS, but it also includes a TAPICC server component. How exactly they work together is up to the implementer.

So, yes, TAPICC is a "bridge" between the CMS and the TMS in the sense that the CMS is the TAPICC-client while the TMS includes the TAPICC-server.

@mesztam

mesztam commented Feb 21, 2019

I got lost a bit, so do we need a /submit action, or e.g. a specific job state, or anything else to trigger the TMS to start the translation job?
Or is the job started as soon as the child tasks/assets are fully set up? (How do we indicate whether a task is ready to start?)
Or should this be fully async, with the different tasks starting as soon as their input is in place, while others are still being set up? (In some TMS systems' APIs this is not possible, so you cannot extend an existing project with new inputs once the project is started.)

@ysavourel
Author

Since I'm starting to look again at doing a TAPICC implementation, I'll try to follow up on this, which is still not resolved as far as I understand.

The /submit would let the TAPICC-server (where the job is created) know that there are no more inputs or tasks to be associated with the job being created. So, yes, in a sense it is a change of status for the job.
Note that the client would send that call only once it has received back the results for all the task and input creation calls it sent.

@Alino
Member

Alino commented Sep 10, 2019

from today's meeting

It looks like we do not need that "submission is done" call, because a Job can be open forever.

@Alino closed this as completed Sep 10, 2019
@ysavourel
Author

I have to re-open this issue because I found a case where knowing at the task level that the task is ready is not enough:

  • The TAPICC client is creating a job,
  • It submits a bunch of inputs,
  • And then creates 3 tasks: translation into FR, translation into DE, and translation into JA, with all tasks using the same list of input files.

This is a fairly typical project, probably even the main use case for us.

Then our internal system would very much like to treat those three tasks within a single project. It makes no sense for us to have three separate projects for this.

The problem: How do we know the client is done with submitting tasks?

@Alino reopened this Oct 1, 2019
@jcompton-moravia

jcompton-moravia commented Oct 7, 2019

I'm attaching a discussion thread that took place on Skype between Yves, Alex, Wojciech and myself, where the conclusion of the discussion seems to be: let's add a property to the Job object that would indicate whether a Job is an "open" or "sealed" package. The "work protocol"/use case/model we're trying to support is the "classic localization project", where none of the tasks within the job should be executed until all tasks are present. This is opposed to the "continuous localization" model, where tasks should be executed as soon as they are assigned, with no dependency on any other task. The "open/sealed" analogy treats the Job like a package: if it is "open", then any number of tasks can be added to it for as long as it remains open; if it is "sealed", then whatever tasks are in there are the tasks that represent the Job.
Skype Transcript about Jobs.txt

@ysavourel
Author

Looking more at this issue, I'm not sure a new property with an open/sealed value at the job level corresponds exactly to what we need. It seems to assume the job cannot be changed anymore, but in reality we are just trying to set a trigger for a batch of posts.
Early on I believe Alex was proposing a specific endpoint for the given job: when called, it means the submitter is done submitting and the host can start the tasks (if it needed to wait for that signal). Hosts that can start each task as it comes can simply ignore that request.
This would allow the requester to add tasks later on if needed. It doesn't "close" the job.
The issue is more about a process than a state.
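
To make that distinction concrete, a small sketch of the two readings of the signal (names and shapes are illustrative only):

# "State" reading: the job is sealed and can no longer be extended.
def submit_as_state(job):
    job["status"] = "sealed"               # later task additions would be rejected

# "Trigger" reading: start whatever is waiting, but keep the job open
# so more tasks can still be added (and triggered) later.
def submit_as_trigger(job, tasks):
    for task in tasks:
        if task["progress"] == "new":
            task["progress"] = "pending"   # the host may start working on it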
