Currently the task input is a file. This does not work well with the bucket idiom introduced with the API. Also, downloading payloads currently blocks the interface.
We need to refactor the task (as well as the web and API views) to accept a uniform payload identifier as task input. That identifier can then be used to trigger further actions. For example, if the payload identifier is bucket://1234, the folder named 1234 in the storage backend is used and analyzed, the model is identified, and so on. If a URL is specified instead, the payload should first be downloaded into a bucket.
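A minimal sketch of what dispatching on such an identifier could look like, assuming a Python code base; `resolve_payload`, `download_into_new_bucket`, and the id scheme are illustrative names, not the project's actual API:

```python
from urllib.parse import urlparse


def download_into_new_bucket(url: str) -> str:
    """Placeholder for the real download step; returns the new bucket URI."""
    # In the real implementation this would stream the URL into the storage
    # backend without blocking the web/API interface.
    new_bucket_id = "1234"  # assumed id scheme, for illustration only
    return f"bucket://{new_bucket_id}"


def resolve_payload(payload_id: str) -> str:
    """Map any supported payload identifier onto a bucket URI a task can work on."""
    parsed = urlparse(payload_id)
    if parsed.scheme == "bucket":
        # The netloc names the folder in the storage backend, e.g. bucket://1234 -> 1234
        return payload_id
    if parsed.scheme in ("http", "https"):
        return download_into_new_bucket(payload_id)
    raise ValueError(f"unsupported payload identifier: {payload_id!r}")
```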
This input resource identifier also allows for easy chaining of tasks. For example, it could first be passed to a task which determines that a file needs to be downloaded; the file is then downloaded and decompressed into a bucket, and the actual processing task is executed with the newly created bucket URI, and so on.
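One way this chaining could look, assuming every task maps an input payload URI to an output payload URI; the task names and the stubbed-out bodies are purely illustrative:

```python
def download_task(uri: str) -> str:
    if uri.startswith(("http://", "https://")):
        return "bucket://1234"           # pretend the URL was fetched into bucket 1234
    return uri                           # already a bucket, nothing to do


def decompress_task(uri: str) -> str:
    return "bucket://1234-extracted"     # pretend the archive was unpacked into a new bucket


def analyze_task(uri: str) -> str:
    print(f"analyzing {uri}")            # the actual processing (model identification, ...)
    return uri


# Each task only needs to understand the URI it receives, so new preprocessing
# steps can be inserted without touching the tasks downstream.
uri = "https://example.com/firmware.bin"
for task in (download_task, decompress_task, analyze_task):
    uri = task(uri)
```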
We also need to define a standard output format for tasks so that this chaining/onion model is possible.
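As a starting point for discussion, one possible shape for such a result, assuming the only hard requirement is that the output payload URI is machine-readable so the next task in the chain can pick it up; all field names are suggestions:

```python
from dataclasses import dataclass, field


@dataclass
class TaskResult:
    task: str                                  # name of the task that produced the result
    status: str                                # e.g. "ok" or "error"
    payload: str                               # output payload URI, e.g. "bucket://1234-extracted"
    meta: dict = field(default_factory=dict)   # task-specific extras (logs, timings, ...)


result = TaskResult(
    task="decompress",
    status="ok",
    payload="bucket://1234-extracted",
    meta={"files": 42},
)
# The next task in the chain is invoked with result.payload as its input.
```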