
Google cloud run volume mounting #2133

Open
loveeklund-osttra opened this issue Dec 11, 2024 · 2 comments
Labels
question Further information is requested

Comments

@loveeklund-osttra

Documentation description

I run dlt in Google Cloud Run and have noticed that when I load big tables it can go OOM even though it writes to files, as Cloud Run doesn't have any "real" storage. What I've been doing instead is mounting a storage bucket and using pipeline_dir to point the pipeline at that directory. This has worked well in the cases I've tested. However, there are documented limitations to mounting a storage bucket as a directory, listed here: https://cloud.google.com/run/docs/configuring/jobs/cloud-storage-volume-mounts . It would be good to have someone who knows how dlt works under the hood check whether these limitations could cause issues (for example, if two or more processes/threads write to the same file). If they don't, it would be nice to include a section about this here
https://dlthub.com/docs/walkthroughs/deploy-a-pipeline/deploy-with-google-cloud-run
to help others in the future.
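For reference, a minimal sketch of what this looks like on the Python side, assuming the mount path from my Terraform config further down; the pipeline name and destination are illustrative, and I believe the keyword argument is spelled pipelines_dir in recent dlt versions:

```python
import dlt

# Point dlt's working directory at the GCS-backed volume mount so that
# extracted/normalized files land on the bucket instead of the container's
# in-memory filesystem. Names and paths here are illustrative.
pipeline = dlt.pipeline(
    pipeline_name="my_pipeline",    # hypothetical
    destination="bigquery",         # hypothetical
    pipelines_dir="/usr/src/ingestion/pipeline_storage",  # the mounted bucket
)
```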

Are you a dlt user?

Yes, I run dlt in production.

@rudolfix rudolfix added the question Further information is requested label Dec 12, 2024
@rudolfix
Collaborator

@loveeklund-osttra dlt works correctly on mounted volumes. There are indeed limitations, e.g. the lack of atomic renames, but we are fairly sure dlt works with GCP mounted volumes: Composer maps its data folder this way, and we support keeping working directories there.

The lack of atomic renames may be a problem in certain cases. In your case, AFAIK you'll always start with clean storage, so you should not have problems with half-committed files from previous runs.

You can also look at #2131: if you can read your data in the same order, you'll be able to extract it chunk by chunk.

@rudolfix rudolfix moved this from Todo to In Progress in dlt core library Dec 12, 2024
@loveeklund-osttra
Author

Perfect, then I'll continue using it! Thanks for the response!

I don't think it starts with an empty folder if I just mount a volume (it uses the same directory between runs, and I can see the previous run's content in storage after it completes), but I call pipeline.drop() at the start anyway, so it shouldn't be a problem for me.

If you decide to add this to the documentation, it may be worth mentioning that you probably want to raise the rename-dir-limit mount option. Otherwise it reports an error (I assume from when it tries to move files between directories), but completes anyway.

I did it like this in terraform

    volume_mounts {
      mount_path = "/usr/src/ingestion/pipeline_storage"
      name       = "pipeline_bucket"
    }
    volumes {
      name = "pipeline_bucket"
      gcs {
        bucket    = google_storage_bucket.dlt_pipeline_data_bucket.name
        read_only = false
        mount_options = [
          "rename-dir-limit=100000"
        ]
      }
    }

Also described here
https://cloud.google.com/run/docs/configuring/services/cloud-storage-volume-mounts .

Re #2131
Most of my data comes from SQL databases / APIs, and I have experimented a bit with chunking using limit. One thing I noticed, though, is that it seems to "recreate" the generator, so it runs the query again every time I call pipeline.run(). Because of that I don't really know when to stop: I guess I could do something like loaded_rows < limit, but a more built-in way to do it would also be nice. It might be quite difficult to actually implement, though (I haven't looked into the code).
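To illustrate the stopping heuristic I mean, here is a plain-Python sketch (not dlt code; fetch_chunk is a hypothetical stand-in for a limited SQL query, and the 250-row table is made up): keep re-running the limited extraction until a run loads fewer rows than the limit.

```python
def fetch_chunk(offset, limit):
    # Hypothetical stand-in for a SQL query with OFFSET/LIMIT;
    # here we simulate a source table of 250 rows.
    table = list(range(250))
    return table[offset : offset + limit]

def load_all(limit=100):
    """Repeat limited 'pipeline runs', stopping when a run comes back short."""
    offset, total = 0, 0
    while True:
        rows = fetch_chunk(offset, limit)
        total += len(rows)        # in a real pipeline: rows loaded by pipeline.run()
        offset += len(rows)
        if len(rows) < limit:     # the loaded_rows < limit heuristic from above
            break
    return total
```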
