Modify Appleyard chopping #809
base: master
Conversation
Restrict the update of rs/rv, rsw/rvw and zfraction in the extended blackoil model by the saturation scaling factor from the Appleyard chopping.
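For readers unfamiliar with the scheme, here is a minimal sketch of the idea being changed. It is hypothetical code, not the actual opm-simulators implementation: the struct, function, and parameter names (`PrimaryVars`, `applyChoppedUpdate`, `dsMax`) are invented for illustration. The saturation step is chopped to a maximum allowed change, and the resulting scaling factor is then also applied to the composition-like unknowns (rs/rv, rsw/rvw, zFraction) instead of letting them take the full Newton step.

```cpp
#include <cmath>

// Hypothetical primary variables for one cell; names are illustrative only.
struct PrimaryVars { double sw; double rs; double zFraction; };

// Apply one damped Newton update. dsMax is the maximum allowed
// saturation change per iteration (the Appleyard chop limit).
void applyChoppedUpdate(PrimaryVars& v, const PrimaryVars& delta, double dsMax)
{
    // Saturation scaling factor: shrink the step so |alpha * delta.sw| <= dsMax.
    const double ds = std::abs(delta.sw);
    const double alpha = (ds > dsMax) ? dsMax / ds : 1.0;

    v.sw -= alpha * delta.sw;

    // The change proposed in this PR: restrict rs/rv (and rsw/rvw,
    // zFraction in the extended blackoil model) by the same saturation
    // scaling factor, rather than applying their full Newton updates.
    v.rs        -= alpha * delta.rs;
    v.zFraction -= alpha * delta.zFraction;
}
```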
jenkins build this please
benchmark please
I have manually gone through the test failures and plotted the significant deviations. Cases not shown have only minor changes. The significant differences point to changes in the time stepping, which in turn affect the results. Some more testing on field models is needed before concluding whether these changes improve the stability of the Newton update, but the test models look OK IMO.
I think it would be good to automate the process of evaluating the test failures. Currently this involves significant manual work, for example:

```
~/workspace/opm/qsummary/build/qsummary \
  flow+udq_wconprod/UDQ_WCONPROD. \
  ~/workspace/opm/opm-tests/udq_actionx/opm-simulation-reference/flow/UDQ_WCONPROD. \
  -v WLPR:OPU02
```

@akva2 What do you think? Could the current test infrastructure be extended with such a workflow?
Benchmark result overview:
View result details @ https://www.ytelses.com/opm/?page=result&id=2114
Yes, indeed. Manually going through all the test failures (very often there are tens of them) is significant work. When the time stepping changes, the current Jenkins comparisons are basically no longer sensible. If Jenkins could help plot all the relevant figures, it would be a big step in the right direction.
It was suggested by @hnil to use fixed timesteps with no chopping for all feature tests, to avoid this quagmire. I think that is a good idea that could avoid a lot of extra work with tests that seemingly fail. An alternative, which may be a bit weaker, would be to only compare the solutions at the report steps. For this concrete PR, I very much want to say "go ahead", but the changes are large enough to make me a little nervous. Maybe for one or a few of the "most failing" cases you could take a reference run, extract the timesteps, then run the PR with those fixed steps, and see if the difference is then significant? (If not, then the difference was only caused by the different timestepping and we are good to go.)
The difficulty is how to determine the fixed timesteps to use.
I agree, and I did not intend this as a new general procedure, only as a way to assess the test failures here a bit better, at the same time seeing if the idea of fixing timesteps is workable.
jenkins build this please
benchmark please
This is part 1 of #803