Segmentation fault under filebench workloads #44
Hello, thanks for trying out SplitFS. I am not sure what is causing the problem on your end. I have modified the filebench_run.sh script to ensure that the correct environment variables are set, and also updated the workload files to match those in the filebench repository (except for webserver and webproxy). Could you pull and check again? Also, please make sure you do the following things:
If it runs correctly, you should see around a 2-3x improvement with SplitFS compared to ext4-DAX in varmail and fileserver. I saw the same on my end when I tested performance on Fedora 30 with the same workload files, on a machine with 96 logical CPUs.
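For reference, the environment setup the maintainer refers to typically follows the pattern below. The paths are placeholders and the variable and library names (`NVP_TREE_FILE`, `libnvp.so`) are recalled from the SplitFS README and may differ in your checkout; verify against your own scripts/filebench/run_fs.sh before running.

```shell
# Hypothetical paths -- adjust to your SplitFS checkout.
export LD_LIBRARY_PATH=/path/to/SplitFS/splitfs
export NVP_TREE_FILE=/path/to/SplitFS/splitfs/bin/nvp_nvp.tree

# SplitFS intercepts file operations via LD_PRELOAD of its
# user-space library when launching the benchmark:
LD_PRELOAD=/path/to/SplitFS/splitfs/libnvp.so filebench -f varmail.f
```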
Thanks for your reply.
Sorry, it seems that the filebench core dump problem is caused by modifying NVP_NUM_LOCKS in nvp_lock.h.
Thank you for this insight. I will look into why the core dump comes up when running with more cores on 5.1. The problem does not seem to be fundamental to SplitFS. Just to confirm, is the performance of SplitFS better than ext4-DAX on Linux 5.1 when you run with cores 0-15? I am not sure why filebench crashes immediately when running on NOVA and PMFS. We don't modify NOVA and PMFS, so this might be an issue with your kernel.
I failed to port SplitFS to Linux 5.1 because the ext4 code changed in Linux 5.1.
Environment: SplitFS, Optane DC PM, Ubuntu 18.04 LTS, glibc 2.27, gcc 7.5.0
When I run the filebench workloads (varmail, fileserver, webserver, webproxy) via scripts/filebench/run_fs.sh, they always end in a segmentation fault (core dumped).
Although varmail, fileserver, and webproxy can still complete the tests and report results, the results are doubtful because the performance is significantly lower than that of ext4-DAX.
Webserver generally crashes immediately. The problem seems to have nothing to do with the workload data size.
I set up SplitFS exactly following the documented steps; the only change I made was raising NVP_NUM_LOCKS from 32 to 144, because my machine has 72 logical CPUs.