Setting up a Job vs. Creating a Job #46
Is the TMS supposed to recreate the structure of a TAPICC Job after visiting this route?
I’m not sure if we would need a job-level flag. Having the status of each task set to a value indicating the job is not submitted yet might be enough.
We do have two structures: the one of TAPICC and the one of the TMS. There is very little chance that they match completely. This is where we really need the input of developers who would implement TAPICC with their systems: we can’t imagine all the ways things could be done. At least I can’t: my experience of connecting CMSes and TMSes is limited to a few systems on each side, and likely rather small compared to all the systems out there.

Maybe an example would help here. At Argos, one of the TMSes we use works the following way: the top unit is a “project”. A given project can have one or more “batches”. A batch is a set of identical source files with a single source language and one or more targets, going through one given workflow. We can construct a batch in several steps (file by file if needed, add targets, etc.), but after that all processes are usually done for all files at once, or at least by language pair.

So, let’s say we get a TAPICC job like this:
We have to re-structure the 7 tasks of the TAPICC data into something like this:
So we would associate each of the items in the batches with a task, and just work through that link as needed.
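The batch example above could be sketched like this. It is a hypothetical regrouping, assuming each TAPICC task carries a source file, a source language, a target language, and a workflow; the field names are illustrative, not from the TAPICC schema:

```python
from collections import defaultdict

def group_tasks_into_batches(tasks):
    """Regroup flat TAPICC-style tasks into TMS 'batches'.

    Following the description above, a batch is a set of source files
    sharing one source language, one set of target languages, and one
    workflow. (Field names here are assumptions for illustration.)
    """
    # First collect, per (file, source language, workflow), the target set.
    per_file = defaultdict(set)
    for t in tasks:
        per_file[(t["file"], t["srcLang"], t["workflow"])].add(t["trgLang"])
    # Then a batch groups the files that share source, targets and workflow.
    batches = defaultdict(list)
    for (f, src, wf), trgs in per_file.items():
        batches[(src, frozenset(trgs), wf)].append(f)
    return dict(batches)

# Seven tasks: two files going to fr and de, one file going to fr, de
# and ja, all through the same (hypothetical) "tep" workflow.
tasks = [
    {"file": "a.xml", "srcLang": "en", "trgLang": "fr", "workflow": "tep"},
    {"file": "a.xml", "srcLang": "en", "trgLang": "de", "workflow": "tep"},
    {"file": "b.xml", "srcLang": "en", "trgLang": "fr", "workflow": "tep"},
    {"file": "b.xml", "srcLang": "en", "trgLang": "de", "workflow": "tep"},
    {"file": "c.xml", "srcLang": "en", "trgLang": "fr", "workflow": "tep"},
    {"file": "c.xml", "srcLang": "en", "trgLang": "de", "workflow": "tep"},
    {"file": "c.xml", "srcLang": "en", "trgLang": "ja", "workflow": "tep"},
]

batches = group_tasks_into_batches(tasks)
```

This yields two batches: one for `a.xml`/`b.xml` (targets fr, de) and one for `c.xml` (targets fr, de, ja). Keeping a link from each batch item back to its originating task, as described, is then just a matter of storing the task id alongside the file name.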
Sorry, I feel lost here. Can you please answer my questions so I can see it more clearly?
It is sent by the creator of the job, when all the job’s components (assets and tasks) have been uploaded and created. It would work roughly this way:
If by migration you mean whatever process needs to happen to make the TAPICC job’s data a “real” job for the TMS, the answer is: the submit call.
I’m talking about the Task.progress: it would probably need a value to indicate that a task is not yet “ready” to be processed. (Maybe I only mentioned a possible status at the job level because we currently have no direct way to know whether a job has been submitted or not; the only way would be to look at the task.progress values, and/or whether it has any assets and tasks at all.) I hope this helps.
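The indirect check described above could look like this. It is only a sketch: the `new` progress value is a hypothetical pre-submission value, not one defined by the spec:

```python
def job_is_submitted(job):
    """Infer submission indirectly, as described above.

    A job with no assets or no tasks is clearly not submitted yet;
    otherwise it counts as submitted only once no task is left in a
    pre-submission progress value ('new' is an assumed value here).
    """
    if not job["assets"] or not job["tasks"]:
        return False
    return all(t["progress"] != "new" for t in job["tasks"])

# A job still being set up vs. one whose tasks have all been started.
staging = {"assets": ["a.xml"], "tasks": [{"progress": "new"}]}
done = {"assets": ["a.xml"], "tasks": [{"progress": "pending"}]}
```

The awkwardness of this inference (scanning every task just to answer a job-level question) is essentially the argument for a direct job-level indicator.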
Thanks, I believe it helped me to understand better.
What if we ignore Task.progress in this matter, and instead create a new attribute Job.submittedAt, which would be a date-time? (We had this property, but I deleted it because I thought it was the same thing as createdAt.) From now on, this property would be filled with a date-time after the job is submitted via the API endpoint. Then we can create another boolean attribute called Job.changedSinceLastSubmit.
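The proposed pair of attributes could behave like this. This is a sketch of the proposal only; neither attribute is in the spec, and the update rules shown are assumptions about the intended semantics:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Job:
    """Sketch of the proposed Job.submittedAt / Job.changedSinceLastSubmit.

    createdAt is set once at creation; submittedAt stays None until the
    submit endpoint is called; changedSinceLastSubmit flips to True if
    the job is modified after a submit.
    """
    createdAt: datetime
    submittedAt: Optional[datetime] = None
    changedSinceLastSubmit: bool = False
    tasks: list = field(default_factory=list)

    def add_task(self, task):
        self.tasks.append(task)
        # Changes before the first submit are part of normal setup.
        if self.submittedAt is not None:
            self.changedSinceLastSubmit = True

    def submit(self):
        self.submittedAt = datetime.now(timezone.utc)
        self.changedSinceLastSubmit = False

job = Job(createdAt=datetime.now(timezone.utc))
job.add_task("en->fr")   # setup: does not count as a post-submit change
job.submit()
job.add_task("en->de")   # this one does
```

This cleanly separates “when was the job created” from “when was it handed over”, which is exactly the distinction that made createdAt look like a duplicate.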
I'm afraid I'm not sure I understand the webhook's purpose. To me a webhook is a callback URL that a client of a TAPICC server sets in the TAPICC server, so that when an event occurs, that client is notified.
I guess that goes back to the discussion about how to do updates.
The idea was that TAPICC would send a webhook to the TMS (or to some middleware between TAPICC and the TMS, for example zapier.com) so that the TMS can recreate the structure in its database, or do whatever it wants, by acting on the webhook (the TAPICC webhook would send all required data to the TMS). But maybe it's a bad idea and we shouldn't expect TMS systems to make the extra effort of supporting this TAPICC webhook; rather, a specific TAPICC implementation should adapt to the API of the TMS to recreate the structure in the TMS. In other words, I think there are three options for how the structure gets created in the TMS from TAPICC: the TMS acts on the TAPICC webhook directly, some middleware translates the webhook into TMS API calls, or the TAPICC implementation itself adapts to the TMS API.
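The webhook option described above could be sketched as follows. Everything here is illustrative: the event name, payload shape, and registry are assumptions, and the HTTP delivery function is injected so the sketch stays self-contained:

```python
import json

# Hypothetical registry of callback URLs that TAPICC clients (a TMS, or
# middleware such as zapier.com) have registered for specific events.
webhooks = {"job.submitted": ["https://tms.example.com/tapicc-hook"]}

def fire_webhook(event, payload, deliver):
    """Notify every URL registered for `event`.

    `deliver` stands in for an HTTP POST function; a real server would
    post `body` to each URL (with retries, signing, etc.).
    """
    body = json.dumps({"event": event, "data": payload})
    for url in webhooks.get(event, []):
        deliver(url, body)

# Simulate delivery by collecting the calls instead of doing real HTTP.
sent = []
fire_webhook("job.submitted", {"jobId": "46"},
             lambda url, body: sent.append((url, body)))
```

Under option one, the handler behind the callback URL would be TMS code; under option two, middleware code; option three needs no webhook at all, since the TAPICC implementation calls the TMS API directly.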
Yes, it's true that one would need to also access the Job. But what about Assets? Do we need this information also on Assets or only on Tasks?
Sorry, I am tired; I think I missed the part where you say TAPICC is the TMS. I thought we needed to send job data from the TAPICC server to some other server (the TMS) which might have a different data structure (the batches example you used before).
I see. Then yes, what you said would have made sense.
In your very first post, does TMS mean TAPICC or something else?
TMS means a normal TMS, but it also includes a TAPICC server component. How exactly they work together is up to the implementer. So, yes, TAPICC is a "bridge" between the CMS and the TMS in the sense that the CMS is the TAPICC client while the TMS includes the TAPICC server.
I got lost a bit, so do we need a …
Since I'm starting back to look at doing a TAPICC implementation, I'll try to follow up on this, which is still not resolved as far as I understand. The …
from today's meeting
I have to re-open this issue because I found a case where knowing at the task level that the task is ready is not enough:
This is a fairly typical project, even probably the main use case for us. Then our internal system would very much like to treat those three tasks within a single project. It makes no sense for us to have three separate projects for this. The problem: How do we know the client is done with submitting tasks?
I'm attaching a discussion thread that took place on Skype between Yves, Alex, Wojciech and myself, where the conclusion of the discussion seems to be: let's add a property to the Job object that would indicate if a Job is an "open" or "sealed" package.

The "work protocol"/use case/model we're trying to support is the "classic localization project", where none of the tasks within the job should be executed until all tasks are present. This is opposed to the "continuous localization" model, where tasks should be executed as soon as they are assigned, with no dependency on any other task.

The "open/sealed" analogy treats the Job like a package. If it is "open", then any number of tasks can be added to it for as long as it remains open. If it is "sealed", then whatever tasks are in there are the tasks that represent the Job.
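The open/sealed package idea could be sketched as a small state machine. The class and method names are assumptions for illustration, not part of the spec:

```python
class SealedJobError(Exception):
    """Raised when trying to modify a job that is already sealed."""

class Job:
    """Sketch of the 'open/sealed package' model discussed above.

    While the job is 'open', any number of tasks can be added; once it
    is 'sealed', the task list is final and execution may begin.
    """
    def __init__(self):
        self.state = "open"
        self.tasks = []

    def add_task(self, task):
        if self.state == "sealed":
            raise SealedJobError("cannot add tasks to a sealed job")
        self.tasks.append(task)

    def seal(self):
        self.state = "sealed"

    def ready_to_execute(self):
        # The 'classic localization project' rule: nothing runs until
        # all tasks are known, i.e. until the package is sealed.
        return self.state == "sealed"
```

The "continuous localization" model would simply ignore the seal and treat each task as executable on assignment, which is why the property needs to be per job rather than global.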
Looking more at this issue, I'm not sure a new property with an open/sealed value at the job level corresponds exactly to what we would need. It seems to assume the job cannot be changed anymore. But in reality we are just trying to set a trigger for a batch of posts.
I think we may still need that /jobs/{jobId}/submit command we had a while back. Creating a job is one POST call, but then we need to upload all the assets for that job and then create all the tasks for each asset. We will likely need a way to indicate to the TMS side that we are done with adding assets and tasks for a given job. A bit like staging the job and then committing it.
Otherwise the TMS side may have a hard time organizing its own structure. For example, the TMS may need to create one separate project for each set of files with the same source language. Another case may be if the TMS needs to group files per language pair. Etc.
In other words, the TMS may need to know about the whole job before it can actually start creating its internal structures.
The steps of setting up a job would be something like this:
This also means that we may need some value indicating that the progress of the tasks is not even started yet (while the job setup is being done). Something maybe like new or not-submitted-yet.
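The setup-then-submit flow described above could be sketched with a toy in-memory server. The endpoint names follow the discussion; the server class and the `new`/`pending` progress values are illustrative assumptions:

```python
class TapiccServer:
    """Toy in-memory stand-in for a TAPICC server, illustrating the
    staged flow: create job, add assets, add tasks, then submit."""

    def __init__(self):
        self.jobs = {}

    def post_job(self, job_id):
        # POST /jobs - create the (still empty) job.
        self.jobs[job_id] = {"assets": [], "tasks": [], "submitted": False}

    def post_asset(self, job_id, asset):
        # POST an asset into the job.
        self.jobs[job_id]["assets"].append(asset)

    def post_task(self, job_id, asset, trg_lang):
        # Tasks start in a pre-submission progress value such as 'new'.
        self.jobs[job_id]["tasks"].append(
            {"asset": asset, "trgLang": trg_lang, "progress": "new"})

    def submit(self, job_id):
        # POST /jobs/{jobId}/submit - the job is now complete; the TMS
        # side can safely build its own structures and start the tasks.
        job = self.jobs[job_id]
        job["submitted"] = True
        for t in job["tasks"]:
            t["progress"] = "pending"

srv = TapiccServer()
srv.post_job("46")
srv.post_asset("46", "a.xml")
srv.post_task("46", "a.xml", "fr")
srv.post_task("46", "a.xml", "de")
srv.submit("46")
```

Until submit is called, the TMS side sees only tasks in the not-yet-started value and can defer building its internal structures, which is the staging/committing behaviour the issue asks for.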