Question: Lazy restore tends to restore all pages rather than only the pages that were really touched? #2399
Adrian has a good blog post on how this could be achieved:
Thanks for your reply. The blog post is quite good, but I think my description was ambiguous. By 'touch' here I mean 'access' rather than 'modify'. What I am asking is whether it is possible to restore only the pages accessed in phase 2; these pages are not necessarily dirty. In the blog you mentioned, Adrian pre-dumps all pages (in my case, all pages allocated in phases 1 and 2) to /tmp/cp/1, then dumps the dirty pages modified in phase 2 to /tmp/cp/2, then restores all pages pre-dumped to /tmp/cp/1, and only lazily restores the pages dumped to /tmp/cp/2. This does not look like lazy restore to me, since it eagerly restores every page written by the pre-dump phase.
Because it was designed for lazy live migration. The behavior you expect can be easily implemented. How are you going to use it? What benefits do you see in this use case?
Could you give me some ideas? I have a naive idea of my own:
For many applications (serverless applications, for example), the memory working set goes through an inflate-then-deflate pattern. That is, during the initialization phase (loading lots of libraries) they access many more pages than during the serving phase (just an RPC server sitting there waiting for requests). A large fraction of the pages accessed in the init phase (I call them cold pages) are rarely accessed in the serving phase, so I want to keep those cold pages on disk and restore only the hot pages (those accessed in the serving phase). Cold pages can be brought into memory on demand via page faults. I think this can save memory and accelerate restoration.
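For what it's worth, the on-demand path described here maps naturally onto Linux's userfaultfd, which is also the mechanism CRIU's lazy-pages daemon builds on. Below is a minimal sketch (not CRIU's actual code): a handler thread serves missing-page faults for an anonymous region, with a marker byte standing in for page contents that a real lazy restore would read from the checkpoint images. Error handling is mostly omitted; build with `cc -pthread`.

```c
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static long page_size;

/* Serve each missing-page fault by copying in a freshly filled page.
 * A real lazy restore would read the page from the image files instead. */
static void *fault_handler(void *arg)
{
    int uffd = *(int *)arg;
    char *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    for (;;) {
        struct uffd_msg msg;
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
            break;
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            continue;

        memset(src, 0x42, page_size);   /* stand-in for page contents */

        struct uffdio_copy copy = {
            .dst = msg.arg.pagefault.address & ~((unsigned long)page_size - 1),
            .src = (unsigned long)src,
            .len = page_size,
        };
        if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
            perror("UFFDIO_COPY");
    }
    return NULL;
}

int main(void)
{
    page_size = sysconf(_SC_PAGE_SIZE);

    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    size_t len = 16 * page_size;
    char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Ask the kernel to report faults on not-yet-populated pages. */
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)area, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    pthread_t thr;
    pthread_create(&thr, NULL, fault_handler, &uffd);

    /* The first touch of each page blocks until the handler fills it. */
    for (size_t i = 0; i < len; i += page_size)
        printf("page %zu -> %#x\n", i / page_size, area[i]);
    return 0;
}
```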
Hi, I used to work on using CRIU to implement the "lazy migration" you describe for serverless functions. I also noticed that the current lazy migration in CRIU is actually lazy live migration, not lazy restore. From my understanding, your current idea is to track hot pages and prefetch them, and then use userfaultfd to load the remaining cold pages. I would recommend looking at a paper (https://dl.acm.org/doi/pdf/10.1145/3445814.3446714) where the authors implemented a similar idea on vHive. I hope their design can help you develop yours.
In fact, I have already read the vHive paper, and this idea is directly inspired by it. vHive works on Firecracker VMs, whose checkpoints are dumped from the anonymous memory of the VM monitor process; they therefore contain the full contents of VM memory, which makes the record-and-replay approach easier to implement. CRIU, by contrast, works on processes, whose checkpoints contain only the process's private data (anonymous and dirty file-backed pages). That is why I need to track and dump not only private data but all accessed pages. Thanks for your kindness, though.
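One possible way to approximate "accessed" tracking for anonymous memory, sketched under the assumption that a populated page implies a past access (this is not how CRIU tracks memory, and it does not catch reads of file-backed pages, which would need something like idle-page tracking): for a MAP_ANONYMOUS mapping, a page is only populated on first touch, so the "present" bit (bit 63) of its /proc/self/pagemap entry serves as a rough proxy.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGE_SIZE);
    size_t npages = 8;
    char *buf = mmap(NULL, npages * psz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    buf[0 * psz] = 1;   /* touch pages 0 and 3 only */
    buf[3 * psz] = 1;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    for (size_t i = 0; i < npages; i++) {
        uint64_t entry;
        /* Each pagemap entry is 8 bytes, indexed by virtual page number. */
        uintptr_t vpn = ((uintptr_t)buf + i * psz) / psz;
        pread(fd, &entry, sizeof(entry), vpn * sizeof(entry));
        printf("page %zu: %s\n", i,
               (entry >> 63) & 1 ? "touched" : "never touched");
    }
    return 0;
}
```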
And I still wonder whether the CRIU developers are interested in supporting true "lazy restore".
Although I'm interested and think it would be useful, it's a bit difficult for me to implement.
@LanYuqiao, thank you for clarifying your use case. As Andrei mentioned, the original implementation was designed for live migration. In this scenario, we have residual dependencies between the source and destination machines. In particular, we need to make sure that all pages are restored because if the source machine becomes unavailable (e.g., due to system failure), the restored application would fail.
It should be fairly easy to modify CRIU to restore memory pages only when a page fault occurs. However, this would result in poor performance for the restored application: loading a memory page from disk (or over the network) is significantly slower than direct access from memory. This can be observed in the following demo: https://asciinema.org/a/4QgtYPW9XtTngTyCX5Jsibqth (the application is significantly slower for a few seconds after restore). What is the main problem you are trying to solve? Why do you want pages to be restored only when they are accessed in phase 3?
In my case, the number of pages accessed in phases 2 and 3 is much smaller than in phase 1. A large fraction of the pages accessed in phase 1 will never be accessed again, which means we don't need to restore all pages to run the app; we only need to restore the pages that are actually accessed. Restoring all pages wastes both memory and time.
For example, an application works like this:
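A minimal hypothetical C sketch of that pattern (the sizes and phase boundaries are illustrative, not from the original):

```c
/* Phase 1 inflates the working set; phases 2 and 3 use only a small
 * hot subset. The checkpoint would be taken during phase 2. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TOTAL (512 * 1024 * 1024)  /* bytes touched during init only */
#define HOT   (4 * 1024 * 1024)    /* working set kept after init    */

int main(void)
{
    char *mem = malloc(TOTAL);
    memset(mem, 1, TOTAL);   /* phase 1: init touches every page         */

    for (;;) {               /* phase 2: idle server -- checkpoint here  */
        sleep(1);
        memset(mem, 2, HOT); /* phase 3: after restore, hot pages only   */
    }
}
```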
When this application reaches phase 2, checkpoint it and then lazy-restore it. It seems that all pages are restored, rather than only the pages touched in phase 2.
Is it possible to restore only the pages touched in phase 2, and lazily restore the pages touched in phase 3? I think that would be true lazy restore.