As data volumes grow it may become infeasible to store datasets on personal machines or in the cloud. This makes the common model, in which data is transferred over the network to and from a processing environment, more challenging, if not impossible. Researchers may opt to park datasets directly on a cluster or supercomputer.
We should allow a user with access to a given cluster to address and annotate datasets directly on its filesystem.
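Concretely, addressing a dataset in place might amount to storing a pointer to a cluster-local path alongside user annotations. A minimal sketch, with illustrative field names (nothing here is a committed schema):

```python
from dataclasses import dataclass, field

@dataclass
class ClusterDataset:
    """A dataset addressed in place on a cluster filesystem (illustrative)."""
    cluster: str  # name of the registered cluster/agent
    path: str     # absolute path on that cluster's filesystem
    annotations: dict[str, str] = field(default_factory=dict)
```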
This raises a few implementation questions:

- Do we need a periodic task to check for changes (see the sketch after this list)?
- How should we handle removal of files/folders?
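If a polling approach is taken, it could address both questions at once by diffing filesystem snapshots. A minimal sketch, assuming plain polling; `snapshot`, `watch`, and the five-minute interval are illustrative, not a proposed API:

```python
import time
from pathlib import Path

def snapshot(root: Path) -> set[str]:
    """Record the relative paths of every file currently under root."""
    return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

def watch(root: Path, interval: float = 300) -> None:
    """Poll root on a fixed interval and report added/removed files."""
    seen = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        for path in sorted(current - seen):
            print(f"new file, index it: {path}")
        for path in sorted(seen - current):
            print(f"file gone, mark its annotations stale: {path}")
        seen = current
```

Walking a large filesystem with `rglob` on every tick could get expensive; comparing a stored manifest of paths and mtimes, or letting users trigger a re-scan manually, are alternatives worth weighing.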
We could also consider providing a link to automatically open a Jupyter notebook via SSH tunnel in a given workflow's container environment, rather than submitting it as a batch job.
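The link could invoke something like the helper below to set up the tunnel. This is a sketch only: `open_notebook_tunnel` and its parameters are hypothetical, and it assumes OpenSSH is available and the login node is reachable from the user's machine:

```python
import subprocess

def open_notebook_tunnel(user: str, login_host: str,
                         compute_node: str, port: int = 8888) -> None:
    """Forward localhost:<port> through the cluster login node to a Jupyter
    server listening on the compute node (hypothetical helper)."""
    subprocess.run([
        "ssh", "-N",                             # forward ports only, no remote shell
        "-L", f"{port}:{compute_node}:{port}",   # local port -> node:port, via login host
        f"{user}@{login_host}",
    ])
```

The UI link would then point at `http://localhost:8888` (or whatever port the notebook advertises), with authentication still handled by the user's own SSH credentials.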