From d8c7d4a7855413d19a0ddacdaa67e440d8554afe Mon Sep 17 00:00:00 2001
From: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Sun, 12 Nov 2023 20:19:24 -0600
Subject: [PATCH] Cantera 3.0 Update (#10)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Heat Release Rate
  Added heat release rate as both an observable and in the sim explorer
* Example Directory Changed and FirstTimeUse.docx made.
* removing opt mech files
* Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn
  Right now, these codes are not working. It is just the beginning.
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* working on get_log_posterior_density and getting varying_rate_vals
* forcing 'final' after minimization to be 'residual'
* Separating Bayesian into 5 steps to simplify things for Travis
* Update CheKiPEUQ_integration_notes.txt
* Update CheKiPEUQ_integration_notes.txt
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* Update CheKiPEUQ_from_Frhodo from the CheKiPEUQ_Integration branch
* CheKiPEUQ_integration
  Merging from Ashi's branch
  Modified GUI to include variables needed for Bayesian optimization
* Merge CheKiPEUQ local (#5)
  changing variable names in fit_fcn and separating the bayesian case into an if statement. Making single return for verbose versus normal case.
  Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn - Right now, these codes are not working. It is just the beginning.
  Create ExampleConfig.ini
  add comments to fit_fcn.py about what is fed in by "args_list"
  renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object.
* Added "Force Bayesian"
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* update get_last_obs_sim_interp
* moved CheKiPEUQ_PE_object creation into time_adjust_func
* Update fit_fcn.py
* working on get_log_posterior_density and getting varying_rate_vals
* Update fit_fcn.py
* forcing 'final' after minimization to be 'residual'
* switch to negative log P for objective_function_value
* Trying to allow Bayesian way to get past the "QQ" error by feeding residual metrics.
* trying 10**neg_logP in lieu of loss_scalar.
* adding Bayesian_dict to make code easier to follow
* Separating Bayesian into 5 steps to simplify things for Travis
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Create CiteSoftLocal.py
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* adding CheKiPEUQ local
* CheKiPEUQ_local
* removing things not needed from CheKiPEUQ
* try moving CheKiPEUQ_local
* parameter_estimation class still not being found for CheKiPEUQ_local
  Specifically from inside CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Trying to use only CheKiPEUQ_local
* Merge Fix
  Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>
* Bayesian Changes
  Changes to Optimization tab GUI
  Reverting some changes in fit_fcn now that CheKiPEUQ will be called after simulations.
  Prior naming was more correct
* Minor Changes
* making get_consolidated_parameters_arrays
* Bayesian_dict parsing of initial guesses and bounds added
* for params, deepcopy was needed, and added print lines.
* Adding new opt settings
  Also spent a long time fixing the scientific spinbox
* Fixing the logical error for lower bound.
  I fixed the logical error, but line 212 is now printing this:
  line 212 [True, True, True] [0, 2.4424906541753446e-16, -1.7976931348623155e+288] -1.7976931348623155e+288
  That means that the comparison is not working correctly for the min_neg_system value. In the next commit, I'm going to use the 1E99 way of doing things.
* cleanup & working towards inclusion of pars_bnds_exist
* rate_constants_parameters_bnds seem to be parsed and passed correctly to CheKiPEUQ
* Adding in unbounded_indices and return_unbounded_indices code
* minor syntax fixes
* Added remove_unbounded_values calls to code to truncate arrays etc.
* Moved more of the Bayesian_Dict fields population to init
* Cleaning up print statements
* Fixing rate_constants_parameters_bnds_exist for multiple rate constants
* Update CheKiPEUQ_integration_notes.txt
* Cleaning up
  Boxes in uncertainty function now functional
  Also have further modified scientificspinbox
* CheKiPEUQ: First attempt to implement variance per response for too long
* Linear scaling working now, but some 'iterations' of Frhodo are freezing during CheKiPEUQ's PE_object init.
  I am trying to track down the reason. It is somewhere in the responses_observed_uncertainties manipulation during CheKiPEUQ's init.
* Reduced the slowdown to the extent of not freezing
  I found that the slowdown was (surprisingly) caused in the loop that tried to convert zero weightings into tiny finite weightings. Changing the code to use machine epsilon in one of the steps rather than minValue/1E6 made that loop faster (not completely sure why). The shape of the weightings looks odd. In the next commit I will try to print out some array shapes.
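Two of the numerical pitfalls chronicled in these commits can be reproduced in a few lines: the failing lower-bound comparison (which a later commit traces to a stray abs()), and the zero-weighting floor based on machine epsilon. All names and values below are illustrative sketches, not Frhodo's actual code:

```python
import sys

# 1) The failing bound comparison: an abs() on the left-hand side makes
#    an effectively unbounded lower bound look larger than the -1E99
#    sentinel, because any abs() result beats a negative threshold.
lower_bnd = -1.7976931348623155e+288
print(abs(lower_bnd) > -1e99)  # True: the buggy comparison
print(lower_bnd > -1e99)       # False: the intended comparison

# 2) The zero-weighting floor: replacing zero weightings with machine
#    epsilon keeps them tiny but finite for the Bayesian step.
eps = sys.float_info.epsilon   # ~2.22e-16
weightings = [0.0, 0.5, 1.0, 0.0]
weightings = [w if w > 0 else eps for w in weightings]
```

Note that 2.4424906541753446e-16 in the print line above is on the order of machine epsilon, consistent with this floor being applied.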
* Fixed shape of weighting array passed to CheKiPEUQ
  Now need to do cleanup of print statements in next commit.
* Removing excess print statements and cleaning up.
* Improving heuristic slightly
* Updating version number and fixing minor error from the recent edits.
* Che ki peuq integration v3 (#8)
  * making get_consolidated_parameters_arrays
  * Bayesian_dict parsing of initial guesses and bounds added
  * for params, deepcopy was needed, and added print lines.
  * Fixing the logical error for lower bound.
    I fixed the logical error, but line 212 is now printing this:
    line 212 [True, True, True] [0, 2.4424906541753446e-16, -1.7976931348623155e+288] -1.7976931348623155e+288
    That means that the comparison is not working correctly for the min_neg_system value. In the next commit, I'm going to use the 1E99 way of doing things.
  * Changed to -1E99 and +1E99 check
    SOMEWHAT SURPRISINGLY, THE COMPARISON IS STILL FAILING. -1E288 > -1E99 is returning True.
  * fixing comparisons: There was actually an "abs" I had not noticed.
  * extra space deleted
  * cleanup & working towards inclusion of pars_bnds_exist
  * rate_constants_parameters_bnds seem to be parsed and passed correctly to CheKiPEUQ
  * Adding in unbounded_indices and return_unbounded_indices code
  * minor syntax fixes
  * Added remove_unbounded_values calls to code to truncate arrays etc.
  * Moved more of the Bayesian_Dict fields population to init
  * Cleaning up print statements
  * Fixing rate_constants_parameters_bnds_exist for multiple rate constants
  * Update CheKiPEUQ_integration_notes.txt
  Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>
* Removing print statement from CheKiPEUQ_local
* Style Changes
* CheKiPEUQ changes
  CheKiPEUQ was checking bounds after they had already been enforced. Differing ways of checking bounds was causing CheKiPEUQ to throw -inf.
  CheKiPEUQ no longer checks bounds.
  CheKiPEUQ obj_obj function no longer shows raw -1*log_posterior_density, but is instead the relative change from the initial guess. This should not alter convergence or how it runs, but it does make it easier to see how the value is changing.
* Refactoring
  Refactoring to clean up fit_fcn
  Also changing imports slightly
* Implemented Uncertainty in Observable Data
  Implemented GUI elements and linked to CheKiPEUQ. Uncertainty is %. Need to handle uncertainties better in opt. Log bayesian is broken
* Added Shaded Unc
  Shading has a gradient, will likely remove for speed
* Modifications to absolute uncertainty
  Added uncertainty type choice to saved variables
  Refactored OoM to convert_units
* Implementing rate coef/bnds on Plogs and falloff eqns
* Troe Optimization
  Backend mech structures created
  Arrhenius optimization functioning again
  Bug fix and mech.reset work
  Fixed mechanism double loading from setting use_thermo_file_box programmatically
  mech.reset now includes Plog and Falloff properly.
  More back end work on falloff/plogs
  Backend mech work
  Changing arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates
  Arrhenius Optimization working again
  Arrhenius working
  Work to be done with Falloff
  Troe kind of working
  Troe kinda works
  Troe kinda works but it's very slow to fit the coefficients.
  I'm going to switch to SRI and hope for a faster convergence time to fitting the coefficients
  Working on SRI Fit
  Working on falloff
  Need to change mech to SRI
  Need to save mech as yaml for reverting back to later
  Working on Falloff Fits
  SRI fitting progress
  SRI fitting
  More SRI Fitting
  Working on nlopt SRI
  Working on SRI
  More SRI fun
  SRI - not much progress
  New SRI fit
  Fitting a,b then all
  Not going well
  Update base_plot.py
  Update base_plot.py
  SRI progress
* Troe Optimization and Full Update
  Updated Environment and bugs popped up
  Quick fixes to address those

Squashed commit of the following:

commit 7c78008561899297477be22b2d6789d8d9be2619
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Jan 17 14:51:38 2022 -0600
    Update main.py

commit 9aee5641e96c5c94c7cf3398e1c8f6c6c113c738
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Nov 8 13:02:15 2021 -0600
    Version check bug fix

commit 703d2253328b5f1e59a85e04cb57552a1d0a8c53
Merge: 0634337 7b12b8e
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Nov 8 12:55:21 2021 -0600
    Merge branch 'Troe_opt' of https://github.com/tsikes/Frhodo into Troe_opt

commit 063433798bdf084fd79bb37c79b8783ace8375d3
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Nov 8 12:55:18 2021 -0600
    Moving Loss Partition Function TCK

commit 7b12b8eceaa5f742e3453bd75d8caa3e69b8c703
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Nov 8 11:47:03 2021 -0600
    Working on Bug Fix
    New bug with RBFopt when using as executable. Working on fixing.
    (command window is flashing each iteration)

commit 681051d2afc94aeba90101e686bab51d199d5d32
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Nov 4 16:20:59 2021 -0500
    Bug Fix
    Fixed CheKiPEUQ interface issues

commit 56645923f1229a17d3f6ac50b4cdce3ebbb91580
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Nov 4 11:01:42 2021 -0500
    Bug Fix
    Fixed bugs with secondary y axis in sim explorer.

commit 78215871dec25a997e24ab2f4055356f78d6a3f7
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Nov 3 19:46:04 2021 -0500
    Update fit_fcn.py

commit f9d11e87d980b7a428c61a5863ae02d5e84d98d9
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Nov 3 18:09:27 2021 -0500
    Optimization Bug Fixing
    Fixed setting rate parametrization constant uncertainties to zero for residual based method.

commit 7bd4d647cb0e54afa40e3703bd490162f866e189
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Nov 2 15:52:28 2021 -0500
    Minor changes
    Consolidated bisymlog scaling factor
    Added new % abs density gradient to options

commit cf6328b0e9038f68a3a34f51a2d628d95dcd2912
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Nov 1 16:22:59 2021 -0500
    Bug Fixing
    Fixing bugs in explorer widget/base plot with limit setting and widget type not defined

commit 15b2acb593cdd15b91c80744ee109665499b5c7f
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Oct 4 21:56:17 2021 -0500
    Minor Update
    Fixed Torr
    added bisymlog to opt type
    Fixed plot error where left limit == right limit

commit 943ad8e2950b9c0267a5af2176ce846a243b8aca
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Sep 23 16:36:40 2021 -0500
    Update for New Tranter Exp Format
    New format does not have tOpt/PT spacing but instead gives velocity

commit 0eb48ce2e16c69cf8387aadeb94f5a0af17bcc68
Author: Travis Sikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Sep 19 22:55:06 2021 -0500
    Opt Update
    Moved bisymlog out of multiple locations into convert_units
    Changed calculate residuals log to bisymlog

commit 5268062afa75b854fe0eb5e3e25cebd0e46b6b58
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Sep 16 15:35:24 2021 -0500
    Changed Fit Function
    Changed fit function to mean instead of median
    Changed so that Bayesian doesn't include penalty function at end. Need to check

commit 3df584d3ea1d0896d96aa4bcda3dc8072f4fd1c0
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Aug 31 22:15:05 2021 -0500
    Update fit_fcn.py

commit a507f938afdf23a5210739eab9eccd3a33b02dee
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Aug 31 19:30:38 2021 -0500
    Changed adaptive loss function
    Adaptive loss function now optimizes inside of rate optimization loop. It's much more efficient. It also means each individual experiment has its own loss alpha

commit bc66027fd2576fbfc3c403be1b6b3cd404d3d842
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Aug 31 16:58:18 2021 -0500
    Fixed usage of C in loss function/GUI

commit 42d82ff555bb864bb430a756691a41597473fe9a
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 29 18:26:13 2021 -0500
    Modified loss function
    Removed prior scaling from loss function to bring it back to publication formula

commit d0d43dc453e170d8f7e6b6d57ff63877f17aa27f
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 29 16:57:00 2021 -0500
    Tinkering with generalized loss func

commit 4702ef3d70e4aacedae1ce39b738bc88c9756acb
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 29 15:51:28 2021 -0500
    Changed to adaptive loss function
    The shape value is now broken into an optimized parameter for inside experiments and between experiments

commit 60e782f43ddc6f473a3b7773167bdb46aaa950e2
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 23 23:54:22 2021 -0500
    Update loss_integral.py

commit d093dda5fe90a34d197a9842dcf2a63c8e8b7433
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 23 22:45:04 2021 -0500
    loss integral fitting

commit 774f70d4ae07539a1153fda94e827beb2e8b1199
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 23 17:41:35 2021 -0500
    loss function changes

commit cba7b9e5638efa58970aa93506218514e170c410
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 23 13:30:17 2021 -0500
    Experiment import update
    Updated experimental conditions import to work for old Tranter style experiment files

commit 016ea7ca63c4af2155d1964c1d01d982dfc308bb
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Fri Aug 20 16:31:40 2021 -0500
    Create fit_coeffs_pygmo.py

commit b95d5a69e6c8784f2182f9d041dd55dd7aa24786
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Fri Aug 20 16:29:08 2021 -0500
    Rollback
    Rolling back to working CRS2 Troe opt

commit 3c577e96ed87fb79a47555acf06d3ebdb65ab541
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Aug 19 14:59:26 2021 -0500
    Testing Troe Opt

commit 553887b648b4f9e6ff910abdbeea81ddfe44cfbb
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Aug 19 01:36:54 2021 -0500
    Update fit_coeffs.py

commit cbfaa20b587b940dc3620f599a3facb7dd1990f7
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Aug 18 16:28:28 2021 -0500
    Working on implementing Augmented Lagrangian

commit 520f822e5f61808140d047a8b9f5049ae9e667cc
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Aug 18 12:44:48 2021 -0500
    Small changes
    bonmin path wasn't working with network path as string
    reduced min_T_range
    changing Troe eqn to be continuous for all Fcent values and other cases

commit ab97a6def6af9a57731dcf88c5fefc88d175d1ce
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 16 22:52:25 2021 -0500
    Moving ipopt and bonmin

commit 3accc3b72cc0e69b5944dda58695470e87013277
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 16 22:41:46 2021 -0500
    Including bonmin and ipopt for rbfopt

commit 3289e0ddd4a18f60d3c5dc7be21879e0cd206fc5
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Aug 16 21:12:21 2021 -0500
    Working on Pygmo and RBFOpt

commit dc91f303cffb52aacf36acf123bfcaee4e0ddd15
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 15 21:55:11 2021 -0500
    Troe updates
    Implemented genetic algorithms into GUI
    Sped GA's up through Numba
    Sped GA's up by only using DIRECT_L instead of CRS2. This is less accurate but much faster

commit 91cbd0f243c0d31751ce37c868cc2d93d7ce9e8a
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 15 01:08:16 2021 -0500
    Implementing genetic algorithms

commit 30ad17b43445e0b0d2f4dd6d616eb7a53487370b
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Aug 12 08:56:46 2021 -0500
    Minimally working Troe

commit 82ee3a198fd4d153f7877887aa710af4b7617714
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 8 23:57:46 2021 -0500
    Update fit_coeffs.py
    Minor bug catch

commit 0d89b688dad5c8289de4e803c9664dc358f6eccb
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 8 23:26:30 2021 -0500
    Maybe working?

commit 5e69b6974732ce2ed7262436da72ee026dfb8c7f
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 8 21:44:16 2021 -0500
    Update fit_coeffs.py
    Almost ready to test, but switching to optimizing arrhenius parameters instead of lpl, hpl rates

commit c3a5098c2c405c8a62a291302014116ceb595e83
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Fri Aug 6 09:54:08 2021 -0500
    Tinkering with ranges and bounds

commit 44d7ddb38b1706c54f68cd829866e9f83df8173c
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Aug 3 14:55:30 2021 -0500
    Troe opt changes
    Troe opt changes.
    fixed bug from updating dependencies

commit 3735e38c33333e09b81d9950e5ccb452dfec82db
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Aug 1 20:44:04 2021 -0500
    Redoing Troe Fit
    Redoing fit. Need to do more work on constraints of fitting LPL, HPL, Fcent

commit 64070580655a1df81ee7c6cd7f1d38aa7d0a49ff
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Jul 29 10:20:16 2021 -0500
    Troe Fitting - Nonfunctional
    Redoing Troe Fitting to be similar to PLOG -> Troe.

commit d5731fc23e48ad7f74b468ae9637c68abb71b725
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Jul 6 15:28:43 2021 -0500
    Resample Nonuniform Data
    Resamples nonuniform data for uncertainty shading smoothing

commit 4e113be44a96730586f92ee1b5b3f17ad3cae9f0
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Jul 5 18:52:25 2021 -0500
    Update options_panel_widgets.py
    Enable/Disable wavelet levels input box accordingly

commit cf67025f6228c6b8e5945401028aac9db2d6ee39
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Jul 5 18:42:40 2021 -0500
    Implemented unc shading over data

commit 2fa08582d44fbc97fd01e2657664cd0eeddcdc1a
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Fri Jul 2 13:59:59 2021 -0500
    Minor update
    Added error checking for 0D reactors to prevent crashing

commit 5abded10b0b27beac16f61728fa375fe792e0ce8
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Jun 30 20:25:17 2021 -0500
    Bug Fix
    Fixed some convergence issues related to outlier determination

commit 9e268a8ac9594a7d8dae86a089365f8ee5453618
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Jun 29 21:29:42 2021 -0500
    Working Troe Optimization
    Refactored Troe fitting into more legible classes.
    Enabled multiprocessing
    Beginning Testing

commit 951efd3b251ea1d40abc1dc3aecc542620b977e8
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sat Jun 26 23:03:43 2021 -0500
    Working
    Basic optimization is working

commit e672ca027fd5303045f8efeda6858d81207803a7
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sat Jun 26 00:12:41 2021 -0500
    Sort of working
    Have nlopt working for fcent

commit 2dbb375cb1a3ac4c5ec989635400fbf2fc8afa96
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Jun 24 17:19:18 2021 -0500
    Update fit_coeffs.py
    Added constraints, but not fitting well with nlopt

commit d9302247552b820f03303a0632ad7bbb1f2ae5a4
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Jun 24 10:33:31 2021 -0500
    Semi working
    Troe is working, but need to implement constraints on Fcent fitting. Should be working for Plog -> Troe after implementing those constraints

commit afdcdbc5955798bf68f900ffe432fe2825c8e131
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Jun 22 15:26:13 2021 -0500
    Reproducing Troe Ok

commit 9d080529713b8ac53c519379b79804c0e8bb3ba6
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Jun 21 22:49:49 2021 -0500
    Update fit_coeffs.py

commit 50265c672547ef8d611abca3654020efc08b048d
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Jun 21 14:27:55 2021 -0500
    Tinkering

commit ee6501edf5675db221420e98b15f0aa3242e1dd2
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Jun 20 21:05:57 2021 -0500
    New Plog Fit Method

commit 3f5619e669e0485c5b06861fb82797bb754aaf82
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Jun 17 13:39:16 2021 -0500
    Nothing works

commit 568ea11ecd5be87afa3c573305fecae487c4b3af
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Jun 8 17:27:22 2021 -0500
    PLOG working?
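For reference, the PLOG form that many of these commits fit and convert from interpolates log k linearly in log P between rate constants tabulated at discrete pressures. A minimal sketch (the Arrhenius parameter values are illustrative, not from any mechanism here):

```python
import math

# Modified Arrhenius rate: k(T) = A * T**b * exp(-Ea/(R*T)),
# with the activation energy folded in as Ea/R.
def arrhenius(T, A, b, Ea_R):
    return A * T**b * math.exp(-Ea_R / T)

# PLOG interpolation between two bracketing pressures P1 < P < P2:
# log k varies linearly in log P between the two Arrhenius fits.
def plog_k(T, P, P1, params1, P2, params2):
    k1 = arrhenius(T, *params1)
    k2 = arrhenius(T, *params2)
    frac = (math.log(P) - math.log(P1)) / (math.log(P2) - math.log(P1))
    return math.exp(math.log(k1) + (math.log(k2) - math.log(k1)) * frac)

# At P = P1 this reduces to the P1 Arrhenius fit; at P = P2, to the P2 fit.
k = plog_k(1000.0, 3.0, 1.0, (1e10, 0.0, 5000.0), 10.0, (1e12, 0.0, 6000.0))
```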
commit e9d957f823be4b4129c56ccbc11db135a0b20a85
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon May 3 22:26:22 2021 -0500
    Works for PLOG but badly fitting

commit 9fb5d64a3e22beec5b54e5f8b42a2bacad288b1f
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon May 3 22:06:08 2021 -0500
    PLOG Residual Working

commit 292d68fb38e29211b22716a567025f7e7a9657c0
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Apr 11 22:13:03 2021 -0500
    Troe Progress

commit 6867bf67882ed4a45ce652debc49910536d4122d
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Apr 7 18:22:37 2021 -0500
    Update fit_coeffs.py

commit 36b155d038fddf8bb32da3fefc35e8840134810a
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Apr 7 15:07:54 2021 -0500
    More Progress

commit e5dec8fae959c28662a911902f301c206f8e6366
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Apr 6 22:12:52 2021 -0500
    Making Progress

commit 0c5052034219462ab6b8bb83a5217dc3332bacdb
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Apr 6 15:01:09 2021 -0500
    Tinkering

commit 47bc21dcbc99288242ca3b2ffdddd541e668024a
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Apr 5 18:34:20 2021 -0500
    Working

commit add6d9dd5681a0c390958a2b85e2b49b11d39570
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Apr 5 13:48:45 2021 -0500
    Big Changes
    Refactored calculation-type functions
    Set mechanism now generates mechanism programmatically rather than from yaml text in memory. This is necessary to be able to switch reaction types

commit 70cab313f211c86434cb79b710f90003b63b51a2
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Mar 31 16:56:05 2021 -0500
    set_mechanism changes
    Working on changing reaction types.
    Side benefit will be faster initialization during optimization

commit f6822d93a11368ced34ea47543ccf43f74fb6422
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Mar 31 13:34:46 2021 -0500
    Update misc_fcns.py

commit cef069ba46c9575a21dd39506aa9540a40bc3892
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Mar 29 14:49:16 2021 -0500
    Update shock_fcns.py

commit b3e91ea58b024aa40807cc08d024f33415f10f2f
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Mar 29 00:00:18 2021 -0500
    Update shock_fcns.py

commit b8744a1ad214bb90bb619a08a289e9f4cc833e10
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Mar 28 23:43:34 2021 -0500
    Update shock_fcns.py
    Updated shock solver to match paper

commit 198eabf72084b04569be942e3d4d88521eba6c21
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Mon Mar 22 00:03:21 2021 -0500
    Troe Falloff functional
    Fitting the falloff parameters is working. Need to think more about initial parameters and see if I can fit LPL and HPL at the same time (this makes PLOGS work)

commit 31fb75facfc140adbeaa2fe8ddcbdfaa2dc4e48e
Author: TSikes <50559900+tsikes@users.noreply.github.com>
Date: Thu Mar 11 16:29:19 2021 -0600
    Bug Fixes

commit 9bbdb0689855d38af559d625258782d2a1cff4e1
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Mar 10 23:24:09 2021 -0600
    Update mech_fcns.py
    Accidentally removed sleep for mech changing.
    This must remain until incident shock reactor is rewritten

commit 4a6efa520b4048d2fc22ca6733d6da97ee279de1
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Mar 10 23:16:01 2021 -0600
    Update mech_optimize.py
    Automatically set minimum time between plots when optimizing

commit c177ed6e58ff468a0bc2e75cd532ce8e8d245601
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Wed Mar 10 22:46:13 2021 -0600
    Plotting Improved
    Set plots to only draw if they are being shown
    Set minimum time since last draw for optimization
    End result: Much faster plotting and program does not appear to hang like before

commit 397457dbf477f76da4c967f2fa5b9f23a2cc7a61
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Tue Mar 9 00:16:23 2021 -0600
    Calculated Troe Derivatives

commit 43a47e944335b0b9316afa63f6f208dd044a4611
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sat Mar 6 13:47:07 2021 -0600
    Bug Fixes

commit f0899df6ca3b020e073b843a3eff498f700a6a4e
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Feb 28 20:27:18 2021 -0600
    Fixed Branch Errors

commit 687737fe016dd65faacf9d96bbc26df80bfc5121
Author: tsikes <50559900+tsikes@users.noreply.github.com>
Date: Sun Feb 28 18:28:40 2021 -0600
    Troe Tinkering

* Bug Fixing
  Fixing issues with optimization
commit d5731fc23e48ad7f74b468ae9637c68abb71b725 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jul 6 15:28:43 2021 -0500 Resample Nonuniform Data Resamples nonuniform data for uncertainty shading smoothing
commit 4e113be44a96730586f92ee1b5b3f17ad3cae9f0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jul 5 18:52:25 2021 -0500 Update options_panel_widgets.py Enable/Disable wavelet levels input box accordingly
commit cf67025f6228c6b8e5945401028aac9db2d6ee39 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jul 5 18:42:40 2021 -0500 Implemented unc shading over data
commit 2fa08582d44fbc97fd01e2657664cd0eeddcdc1a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Jul 2 13:59:59 2021 -0500 Minor update Added error checking for 0D reactors to prevent crashing
commit 5abded10b0b27beac16f61728fa375fe792e0ce8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Jun 30 20:25:17 2021 -0500 Bug Fix Fixed some convergence issues related to outlier determination
commit 9e268a8ac9594a7d8dae86a089365f8ee5453618 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 29 21:29:42 2021 -0500 Working Troe Optimization Refactored Troe fitting into more legible classes. Enabled multiprocessing. Beginning testing
commit 951efd3b251ea1d40abc1dc3aecc542620b977e8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Jun 26 23:03:43 2021 -0500 Working Basic optimization is working
commit e672ca027fd5303045f8efeda6858d81207803a7 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Jun 26 00:12:41 2021 -0500 Sort of working Have nlopt working for fcent
commit 2dbb375cb1a3ac4c5ec989635400fbf2fc8afa96 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 24 17:19:18 2021 -0500 Update fit_coeffs.py Added constraints, but not fitting well with nlopt
commit d9302247552b820f03303a0632ad7bbb1f2ae5a4 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 24 10:33:31 2021 -0500 Semi working Troe is working, but need to implement constraints on Fcent fitting. Should be working for Plog -> Troe after implementing those constraints
commit afdcdbc5955798bf68f900ffe432fe2825c8e131 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 22 15:26:13 2021 -0500 Reproducing Troe Ok
commit 9d080529713b8ac53c519379b79804c0e8bb3ba6 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jun 21 22:49:49 2021 -0500 Update fit_coeffs.py
commit 50265c672547ef8d611abca3654020efc08b048d Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jun 21 14:27:55 2021 -0500 Tinkering
commit ee6501edf5675db221420e98b15f0aa3242e1dd2 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Jun 20 21:05:57 2021 -0500 New Plog Fit Method
commit 3f5619e669e0485c5b06861fb82797bb754aaf82 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 17 13:39:16 2021 -0500 Nothing works
commit 568ea11ecd5be87afa3c573305fecae487c4b3af Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 8 17:27:22 2021 -0500 PLOG working?
commit e9d957f823be4b4129c56ccbc11db135a0b20a85 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon May 3 22:26:22 2021 -0500 Works for PLOG but badly fitting
commit 9fb5d64a3e22beec5b54e5f8b42a2bacad288b1f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon May 3 22:06:08 2021 -0500 PLOG Residual Working
commit 292d68fb38e29211b22716a567025f7e7a9657c0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Apr 11 22:13:03 2021 -0500 Troe Progress
commit 6867bf67882ed4a45ce652debc49910536d4122d Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Apr 7 18:22:37 2021 -0500 Update fit_coeffs.py
commit 36b155d038fddf8bb32da3fefc35e8840134810a Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Apr 7 15:07:54 2021 -0500 More Progress
commit e5dec8fae959c28662a911902f301c206f8e6366 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Apr 6 22:12:52 2021 -0500 Making Progress
commit 0c5052034219462ab6b8bb83a5217dc3332bacdb Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Apr 6 15:01:09 2021 -0500 Tinkering
commit 47bc21dcbc99288242ca3b2ffdddd541e668024a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Apr 5 18:34:20 2021 -0500 Working
commit add6d9dd5681a0c390958a2b85e2b49b11d39570 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Apr 5 13:48:45 2021 -0500 Big Changes Refactored calculation-type functions. Set mechanism now generates mechanism programmatically rather than from yaml text in memory. This is necessary to be able to switch reaction types
commit 70cab313f211c86434cb79b710f90003b63b51a2 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 31 16:56:05 2021 -0500 set_mechanism changes Working on changing reaction types. Side benefit will be faster initialization during optimization
commit f6822d93a11368ced34ea47543ccf43f74fb6422 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 31 13:34:46 2021 -0500 Update misc_fcns.py
commit cef069ba46c9575a21dd39506aa9540a40bc3892 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 14:49:16 2021 -0500 Update shock_fcns.py
commit b3e91ea58b024aa40807cc08d024f33415f10f2f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 00:00:18 2021 -0500 Update shock_fcns.py
commit b8744a1ad214bb90bb619a08a289e9f4cc833e10 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Mar 28 23:43:34 2021 -0500 Update shock_fcns.py Updated shock solver to match paper
commit 198eabf72084b04569be942e3d4d88521eba6c21 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 22 00:03:21 2021 -0500 Troe Falloff functional Fitting the falloff parameters is working. Need to think more about initial parameters and see if I can fit LPL and HPL at the same time (this makes PLOGs work)
commit 31fb75facfc140adbeaa2fe8ddcbdfaa2dc4e48e Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Mar 11 16:29:19 2021 -0600 Bug Fixes
commit 9bbdb0689855d38af559d625258782d2a1cff4e1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:24:09 2021 -0600 Update mech_fcns.py Accidentally removed sleep for mech changing. This must remain until incident shock reactor is rewritten
commit 4a6efa520b4048d2fc22ca6733d6da97ee279de1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:16:01 2021 -0600 Update mech_optimize.py Automatically set minimum time between plots when optimizing
commit c177ed6e58ff468a0bc2e75cd532ce8e8d245601 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 22:46:13 2021 -0600 Plotting Improved Set plots to only draw if they are being shown. Set minimum time since last draw for optimization. End result: much faster plotting, and the program does not appear to hang like before
commit 397457dbf477f76da4c967f2fa5b9f23a2cc7a61 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Mar 9 00:16:23 2021 -0600 Calculated Troe Derivatives
commit 43a47e944335b0b9316afa63f6f208dd044a4611 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Mar 6 13:47:07 2021 -0600 Bug Fixes
commit f0899df6ca3b020e073b843a3eff498f700a6a4e Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 20:27:18 2021 -0600 Fixed Branch Errors
commit 687737fe016dd65faacf9d96bbc26df80bfc5121 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 18:28:40 2021 -0600 Troe Tinkering
commit de2f6d6c72afa2825a718dde98794ee93c3c61c8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 21:06:09 2021 -0600 SRI progress
commit 55c92eaabb86acc4efad28c89e7f8872e6946413 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 17:07:25 2021 -0600 Update base_plot.py
commit a99ae6949523491a828c091c95b7a6d43381a4c0 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 12:11:30 2021 -0600 Update base_plot.py
commit edbc062de10439f61bb82980e5a21e6963811feb Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 23 17:24:00 2021 -0600 Not going well
commit 738a410f596821b8539a911c963ff33bc3312750 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 22 22:08:43 2021 -0600 New SRI fit Fitting a, b then all
commit e45563872533d08db378c581b2d5bf58b87bd285 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 21:02:36 2021 -0600 SRI - not much progress
commit 5889aec780ca4e1cbce370d6943874f6277d9e69 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 01:05:46 2021 -0600 More SRI fun
commit fb0380db74bcc20d98712b15386c9675a55848ee Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 00:27:41 2021 -0600 Working on SRI
commit 665ac25ef0fd65db042893e82585c8086014fbb3 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 18 18:07:54 2021 -0600 Working on nlopt SRI
commit a2593ea4e7e2fe31f0c56c24e223c72182d59cef Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 17 17:49:32 2021 -0600 More SRI Fitting
commit f901332049c809c0b2415ca9023d6e31d88c9789 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 22:36:23 2021 -0600 SRI fitting
commit f3d33a6f4cde8c68c1b3292fdf79363b35e4bc79 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 17:45:57 2021 -0600 SRI fitting progress
commit 3b46b21395a556b2965233961ac080aab8189ae8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 10:00:37 2021 -0600 Working on Falloff Fits
commit 5dab2bff3623e5bedd115384ee0e9c7ec61ff76c Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 15 22:38:09 2021 -0600 Working on falloff Need to change mech to SRI. Need to save mech as yaml for reverting back to later
commit b67d2b1dd4cfe59943680522e005d8c1579060ab Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 22:11:16 2021 -0600 Working on SRI Fit
commit 0aac366aada55b2f552192605e285802bcc6b805 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 15:47:26 2021 -0600 Troe kinda works Troe kinda works, but it's very slow to fit the coefficients. I'm going to switch to SRI and hope for a faster convergence time to fitting the coefficients
commit e40d5860ca5495f677aeb4c64117b52a49355508 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Feb 13 00:33:33 2021 -0600 Troe kind of working
commit bbbed46a06bf6770a7cf73e8abe2880a8f731647 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 11 16:12:14 2021 -0600 Arrhenius Optimization working again Arrhenius working. Work to be done with Falloff
commit 308b5b37bbe290275fe6334411131090d1f8a96d Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 10 17:03:38 2021 -0600 Backend mech work Changing Arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates
commit f1b2a0c0b3d70546bd361982e0af7bd41bf1274f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 22:05:26 2021 -0600 More back end work on falloff/plogs
commit c1c41c3b8698f6bbc1e0237765e3db84e341f465 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 17:20:14 2021 -0600 Bug fix and mech.reset work Fixed mechanism double loading from setting use_thermo_file_box programmatically. mech.reset now includes Plog and Falloff properly.
commit 3b89e99654216a6dcb5eaac8a62d284c7fbe406f Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 16:29:06 2021 -0600 Backend mech structures created Arrhenius optimization functioning again
commit a9c0af4e5e5a064af03a9a08461b090c88ec7db5 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 8 21:18:39 2021 -0600 Nothing Changing
* Updating Frhodo to be compatible with Cantera 3.0. Adding updated adaptive_weights. Changing to Cantera chemkin output
* Arrhenius mostly fixed
* pressure dependent opt fix
* minor bugfix: selecting experimental dir
* Environment update
* widget fix and drhodz_per_rxn fix
* Mech reading update
* Chebyshev bug fix
* Squashed commit of the following:
commit 5e39ec2191bfbed08ce7d2877f2fb7f1d7da24bd Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Sun Oct 16 16:28:13 2022 -0500 Update package dependencies
commit b4854765ecc4afdd3486599b8c57565f6cd23ada Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Sun Oct 16 16:20:02 2022 -0500 Update options_panel_widgets.py Minor bug fix. Newer Qt expects int here
commit 2cfe699b7c8defff89ba872dc2d9c87fa4f647ff Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 21 01:05:26 2022 -0500 Version Bump
commit 97d59d0d67aeba280763fea7d282e306a2fa6751 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 21 01:04:05 2022 -0500 Bug Fix
* Soln2ck Fix Fixed bug in writing thermo if no note exists (like writing from a 'gri30.cti')
* Heat Release Rate Added heat release rate as both an observable and in the sim explorer
* Example Directory Changed and FirstTimeUse.docx made.
* Update FirstTimeUse.docx
* removing opt mech files
* Update fit_fcn.py Replace all for following: change calculate_residuals to calculate_objective_function; change calc_resid_output to calc_objective_function_output
* Update fit_fcn.py
* Update fit_fcn.py changing variable names in fit_fcn and separating the bayesian case into an if statement.
* Update fit_fcn.py
* Update fit_fcn.py
* Update fit_fcn.py Making single return for verbose versus normal case.
* Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn Right now, these codes are not working. It is just the beginning.
* Update fit_fcn.py
* Create ExampleConfig.ini
* add comments to fit_fcn.py about what is fed in by "args_list"
* Update fit_fcn.py
* renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object.
* Added "Force Bayesian"
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* update get_last_obs_sim_interp
* moved CheKiPEUQ_PE_object creation into time_adjust_func
* Update fit_fcn.py
* working on get_log_posterior_density and getting varying_rate_vals
* Update fit_fcn.py
* forcing 'final' after minimization to be 'residual'
* switch to negative log P for objective_function_value
* Trying to allow Bayesian way to get past the "QQ" error by feeding residual metrics.
* trying 10**neg_logP in lieu of loss_scalar.
* Setting forceBayesian
* adding Bayesian_dict to make code easier to follow
* Separating Bayesian into 5 steps to simplify things for Travis
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Create CiteSoftLocal.py
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* Variable name changes (#2)
* Variable name changes Moved image assets and added a GUI screenshot. Replace all for following: change calculate_residuals to calculate_objective_function; change calc_resid_output to calc_objective_function_output. Changing variable names in fit_fcn and separating the bayesian case into an if statement. Making single return for verbose versus normal case.
Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>
* CheKiPEUQ_integration Merging from Ashi's branch Modified GUI to include variables needed for Bayesian optimization
* Merge CheKiPEUQ local (#5) changing variable names in fit_fcn and separating the bayesian case into an if statement. Making single return for verbose versus normal case. Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn - Right now, these codes are not working. It is just the beginning. Create ExampleConfig.ini. add comments to fit_fcn.py about what is fed in by "args_list". renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object.
* Added "Force Bayesian"
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* update get_last_obs_sim_interp
* moved CheKiPEUQ_PE_object creation into time_adjust_func
* Update fit_fcn.py
* working on get_log_posterior_density and getting varying_rate_vals
* Update fit_fcn.py
* forcing 'final' after minimization to be 'residual'
* switch to negative log P for objective_function_value
* Trying to allow Bayesian way to get past the "QQ" error by feeding residual metrics.
* trying 10**neg_logP in lieu of loss_scalar.
* adding Bayesian_dict to make code easier to follow
* Separating Bayesian into 5 steps to simplify things for Travis
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Create CiteSoftLocal.py
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* adding CheKiPEUQ local
* CheKiPEUQ_local
* removing things not needed from CheKiPEUQ
* try moving CheKiPEUQ_local
* parameter_estimation class still not being found for CheKiPEUQ_local Specifically from inside CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Trying to use only CheKiPEUQ_local
* Merge Fix
Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>
* Another Merge Fix
* Forced Bayesian Removed Distribution type, and how many sigma the unc represents is now reachable in var
* update comment in fit_fcn.py
* manually updating fit_fcn.py from Travis's CheKiPEUQ_integration branch GitHub was not allowing the merge properly, so doing this change manually before merging.
* Update CheKiPEUQ_from_Frhodo from the CheKiPEUQ_Integration branch
* GUI Update Bayesian tab now hidden when residuals are checked. Uncertainty/Weights table shows depending upon selection of Residual/Bayesian
* Bayesian Changes Changes to Optimization tab GUI. Reverting some changes in fit_fcn now that CheKiPEUQ will be called after simulations. Prior naming was more correct
* Created CheKiPEUQ Dictionary
* Minor Changes
* Update fit_fcn.py
* Update fit_fcn.py
* creating get_last_obs_sim_interp and also working on Bayesian_dict
* Update fit_fcn.py
* Update fit_fcn.py
* Adding in more Bayesian_dict arguments. (#6)
* Update README.rst Moved image assets and added a GUI screenshot
* Update README.rst
* Moved to Assets Branch
* Update fit_fcn.py Replace all for following: change calculate_residuals to calculate_objective_function; change calc_resid_output to calc_objective_function_output
* Update fit_fcn.py
* Update fit_fcn.py changing variable names in fit_fcn and separating the bayesian case into an if statement.
* Update fit_fcn.py
* Update fit_fcn.py
* Update fit_fcn.py Making single return for verbose versus normal case.
* Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn Right now, these codes are not working. It is just the beginning.
* Update fit_fcn.py
* Create ExampleConfig.ini
* add comments to fit_fcn.py about what is fed in by "args_list"
* Update fit_fcn.py
* renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object.
* Added "Force Bayesian"
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* update get_last_obs_sim_interp
* moved CheKiPEUQ_PE_object creation into time_adjust_func
* Update fit_fcn.py
* working on get_log_posterior_density and getting varying_rate_vals
* Update fit_fcn.py
* forcing 'final' after minimization to be 'residual'
* switch to negative log P for objective_function_value
* Trying to allow Bayesian way to get past the "QQ" error by feeding residual metrics.
* trying 10**neg_logP in lieu of loss_scalar.
* adding Bayesian_dict to make code easier to follow
* Separating Bayesian into 5 steps to simplify things for Travis
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Create CiteSoftLocal.py
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* update comment in fit_fcn.py
* manually updating fit_fcn.py from Travis's CheKiPEUQ_integration branch GitHub was not allowing the merge properly, so doing this change manually before merging.
* Update CheKiPEUQ_from_Frhodo from the CheKiPEUQ_Integration branch
* Update fit_fcn.py
* Update fit_fcn.py
* creating get_last_obs_sim_interp and also working on Bayesian_dict
* Update fit_fcn.py
* Update fit_fcn.py
Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>
* Update fit_fcn.py
* Save coefficient x0 and bnds in optimization Also added Automatic as an option for uncertainty distribution
* Added a couple of missed values
* Update fit_fcn.py Manually making them match
* making get_consolidated_parameters_arrays
* Bayesian_dict parsing of initial guesses and bounds added
* for params, deepcopy was needed, and added print lines.
* Adding new opt settings Also spent a long time fixing the scientific spinbox
* Hall of Fame Added
* Che ki peuq integration v3 (#7)
* making get_consolidated_parameters_arrays
* Bayesian_dict parsing of initial guesses and bounds added
* for params, deepcopy was needed, and added print lines.
Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>
* Fixed bounds provided to CheKiPEUQ
* CheKiPEUQ Fixes
* Fixing the logical error for lower bound. I fixed the logical error, but line 212 is now printing this: line 212 [True, True, True] [0, 2.4424906541753446e-16, -1.7976931348623155e+288] -1.7976931348623155e+288 That means that the comparison is not working correctly for min_neg_system value. In the next commit, I'm going to use the 1E99 way of doing things.
* Changed to -1E99 and +1E99 check SOMEWHAT SURPRISINGLY, THE COMPARISON IS STILL FAILING. -1E288 > -1E99 is returning True.
* fixing comparisons: There was actually an "abs" I had not noticed.
* extra space deleted
* Cleaning up Boxes in uncertainty function now functional. Also have further modified scientificspinbox
* cleanup & working towards inclusion of pars_bnds_exist
* rate_constants_parameters_bnds seem to be parsed and passed correctly to CheKiPEUQ
* Adding in unbounded_indices and return_unbounded_indices code
* minor syntax fixes
* Added remove_unbounded_values calls to code to truncate arrays etc.
* Moved more of the Bayesian_Dict fields population to init
* Cleaning up print statements
* Fixing rate_constants_parameters_bnds_exist for multiple rate constants
* Update CheKiPEUQ_integration_notes.txt
* Che ki peuq integration v3 (#8)
* making get_consolidated_parameters_arrays
* Bayesian_dict parsing of initial guesses and bounds added
* for params, deepcopy was needed, and added print lines.
* Fixing the logical error for lower bound. I fixed the logical error, but line 212 is now printing this: line 212 [True, True, True] [0, 2.4424906541753446e-16, -1.7976931348623155e+288] -1.7976931348623155e+288 That means that the comparison is not working correctly for min_neg_system value. In the next commit, I'm going to use the 1E99 way of doing things.
* Changed to -1E99 and +1E99 check SOMEWHAT SURPRISINGLY, THE COMPARISON IS STILL FAILING. -1E288 > -1E99 is returning True.
* fixing comparisons: There was actually an "abs" I had not noticed.
* extra space deleted
* cleanup & working towards inclusion of pars_bnds_exist
* rate_constants_parameters_bnds seem to be parsed and passed correctly to CheKiPEUQ
* Adding in unbounded_indices and return_unbounded_indices code
* minor syntax fixes
* Added remove_unbounded_values calls to code to truncate arrays etc.
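The bounds-check mystery in the commits above (a comparison on huge sentinel values appearing to fail until an overlooked "abs" was spotted) can be reproduced in a few lines. This is a sketch of the pitfall, not the project's code; the names min_neg_system and cutoff are illustrative:

```python
# Minimal sketch of the comparison pitfall from the commits above.
# A hidden abs() makes a magnitude test fire for the huge negative
# sentinel ~-1.8e288, while the plain comparison behaves as expected.
min_neg_system = -1.7976931348623155e+288  # sentinel meaning "no lower bound"
cutoff = 1e99

plain = min_neg_system > -cutoff       # False: -1.8e288 really is below -1e99
with_abs = abs(min_neg_system) > cutoff  # True: the magnitude exceeds 1e99
print(plain, with_abs)
```

So "-1E288 > -1E99 is returning True" is exactly what an absolute-value comparison produces, which is why switching to an explicit +/-1E99 check only helped once the stray abs() was found.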
* Moved more of the Bayesian_Dict fields population to init
* Cleaning up print statements
* Fixing rate_constants_parameters_bnds_exist for multiple rate constants
* Update CheKiPEUQ_integration_notes.txt
Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>
* Residual Optimization Update Needed to account for exp residuals not having a zero average. Changed to using medians instead of means
* More Loss Function Changes
* Oops
* Bayesian Obj Func Change Changed from -1*posterior to -1/posterior
* Bounds Error Fix
* Update Update default config parameters and fix bayesian obj_fcn back to -1*
* Trying new fonts for log
* CheKiPEUQ: First attempt to implement variance per response for too long
* Linear scaling working now, but some 'iterations' of Frhodo are freezing during CheKiPEUQ's PE_object init. I am trying to track down the reason. It is somewhere in the responses_observed_uncertainties manipulation during CheKiPEUQ's init.
* Reduced the slowdown to the extent of not freezing I found that the slowdown was (surprisingly) caused in the loop that tried to convert non-zero weightings into tiny finite weightings. Changing the code to use machine epsilon in one of the steps rather than minValue/1E6 made that loop faster (not completely sure why). The shape of the weightings looks odd. In the next commit I will try to print out some array shapes.
* Printing shows that the weighting array shape going into CheKiPEUQ is correct.
* Fixed shape of weighting array passed to CheKiPEUQ Now need to do cleanup of print statements in next commit.
* Removing excess print statements and cleaning up.
* Improving heuristic slightly
* Updating version number and fixing minor error from the recent edits.
* Added automatic copy rates to optimize
* Cleaning up the merge conflict that was dropped Removing some print statements and adding a .flatten() that is needed when analyzing multiple experiments.
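The machine-epsilon fix described above (giving degenerate weightings a tiny finite value instead of minValue/1E6) can be sketched as a single vectorized assignment; the function name and the exact replacement rule are assumptions for illustration, not the project's actual code:

```python
import numpy as np

def floor_zero_weights(weights):
    """Replace exactly-zero weightings with machine epsilon so that
    downstream uncertainty math never divides by zero.

    Doing this as one masked assignment avoids the slow element-wise
    loop the commits above describe."""
    w = np.asarray(weights, dtype=float).copy()
    w[w == 0.0] = np.finfo(float).eps  # ~2.22e-16 for float64
    return w

print(floor_zero_weights([0.0, 0.5, 1.0]))
```

Note that np.finfo(float).eps is roughly the 2.44e-16-scale value visible in the bounds printout quoted earlier in this log, which is consistent with epsilon being used as the tiny-weight floor.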
* Remoing print statement from CheKiPUEQ_local * Weights Work * Style Changes * Silly Math Error * CheKiPEUQ changes CheKiPEUQ was checking bounds after they had already been enforced. Differing ways of checking bounds was causing CheKiPEUQ to throw -inf. CheKiPEUQ no longer checks bounds CheKiPEUQ obj_obj function no longer shows raw -1*log_posterior_density, but is instead the relative change from the initial guess. This should not alter convergence or how it runs, but it does make it easier to see how the value is changing. * Minor fixes * Update fit_fcn.py * Nomenclature Change * Refactoring Refactoring to clean up fit_fcn Also changing imports slightly * Preliminary Signal Plot Changes * Weight Function Update Further abstracting weight function for uncertainty implementation Renamed CheKiPEUQ interface class * Implemented Uncertainty in Observable Data Implemented GUI elements and linked to CheKiPEUQ. Uncertainty is %. Need to handle uncertainties better in opt. Log bayesian is broken * Bayesian Opt Uncertainty Working Bayesian uncertainty now functioning for log scale * Added Shaded Unc Shading has a gradient, will likely remove for speed * Uncertainty Shading I prefer this version. It's quicker without much loss. Need to consider adding absolute uncertainties * Added Abs Uncertainty * Modifications to absolute uncertainty Added uncertainty type choice to saved variables Refactored OoM to convert_units * Axes update in draggable x and y axes now are animated objects that update with draggable * GUI falloff/pressure dependent uncertainty * Implementing rate coef/bnds on Plogs and falloff eqns * Troe Optimization Backend mech structures created Arrhenius optimization functioning again Bug fix and mech.reset work Fixed mechanism double loading from setting use_thermo_file_box programmatically mech.reset now includes Plog and Falloff properly. 
More back end work on falloff/plogs Backend mech work Changing arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates Arrhenius Optimization working again Arrhenius working Work to be done with Falloff Troe kind of working Troe kinda works Troe kinda works but it's very slow to fit the coefficients. I'm going to switch to SRI and hope for a faster convergence time to fitting the coefficients Working on SRI Fit Working on falloff Need to change mech to SRI Need to save mech as yaml for reverting back to later Working on Falloff Fits SRI fitting progress SRI fitting More SRI Fitting Working on nlopt SRI Working on SRI More SRI fun SRI - not much progress New SRI fit Fitting a,b then all Not going well Update base_plot.py Update base_plot.py SRI progress Troe Optimization and Full Update Updated Environment and bugs popped up Quick fixes to address those Squashed commit of the following: commit 7c78008561899297477be22b2d6789d8d9be2619 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jan 17 14:51:38 2022 -0600 Update main.py commit 9aee5641e96c5c94c7cf3398e1c8f6c6c113c738 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 13:02:15 2021 -0600 Version check bug fix commit 703d2253328b5f1e59a85e04cb57552a1d0a8c53 Merge: 0634337 7b12b8e Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 12:55:21 2021 -0600 Merge branch 'Troe_opt' of https://github.com/tsikes/Frhodo into Troe_opt commit 063433798bdf084fd79bb37c79b8783ace8375d3 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 12:55:18 2021 -0600 Moving Loss Partition Function TCK commit 7b12b8eceaa5f742e3453bd75d8caa3e69b8c703 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 11:47:03 2021 -0600 Working on Bug Fix New bug with RBFopt when using as executable. Working on fixing. 
(command window is flashing each iteration) commit 681051d2afc94aeba90101e686bab51d199d5d32 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Thu Nov 4 16:20:59 2021 -0500 Bug Fix Fixed CheKiPEUQ interface issues commit 56645923f1229a17d3f6ac50b4cdce3ebbb91580 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Nov 4 11:01:42 2021 -0500 Bug Fix Fixed bugs with secondary y axis in sim explorer. commit 78215871dec25a997e24ab2f4055356f78d6a3f7 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Wed Nov 3 19:46:04 2021 -0500 Update fit_fcn.py commit f9d11e87d980b7a428c61a5863ae02d5e84d98d9 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Wed Nov 3 18:09:27 2021 -0500 Optimization Bug Fixing Fixed setting rate parametrization constant uncertainties to zero for residual based method. commit 7bd4d647cb0e54afa40e3703bd490162f866e189 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Nov 2 15:52:28 2021 -0500 Minor changes Consolidated bisymlog scaling factor Added new % abs density gradient to options commit cf6328b0e9038f68a3a34f51a2d628d95dcd2912 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 1 16:22:59 2021 -0500 Bug Fixing Fixing bugs in explorer widget/base plot with limit setting and widget type not defined commit 15b2acb593cdd15b91c80744ee109665499b5c7f Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Oct 4 21:56:17 2021 -0500 Minor Update Fixed Torr added bisymlog to opt type Fixed plot error where left limit == right limit commit 943ad8e2950b9c0267a5af2176ce846a243b8aca Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Sep 23 16:36:40 2021 -0500 Update for New Tranter Exp Format New format does not have tOpt/PT spacing but instead gives velocity commit 0eb48ce2e16c69cf8387aadeb94f5a0af17bcc68 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Sun Sep 19 22:55:06 2021 -0500 Opt Update 
Moved bisymlog out of multiple locations into convert_units Changed calculate residuals log to bisymlog commit 5268062afa75b854fe0eb5e3e25cebd0e46b6b58 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Sep 16 15:35:24 2021 -0500 Changed Fit Function Changed fit function to mean instead of median Changed so that Bayesian doesn't include penalty function at end. Need to check commit 3df584d3ea1d0896d96aa4bcda3dc8072f4fd1c0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 22:15:05 2021 -0500 Update fit_fcn.py commit a507f938afdf23a5210739eab9eccd3a33b02dee Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 19:30:38 2021 -0500 Changed adaptive loss function Adaptive loss function now optimizes inside of rate optimization loop. It's much more efficient. It also means each individual experiment has its own loss alpha commit bc66027fd2576fbfc3c403be1b6b3cd404d3d842 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 16:58:18 2021 -0500 Fixed usage of C in loss function/GUI commit 42d82ff555bb864bb430a756691a41597473fe9a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 18:26:13 2021 -0500 Modified loss function Removed prior scaling from loss function to bring it back to publication formula commit d0d43dc453e170d8f7e6b6d57ff63877f17aa27f Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 16:57:00 2021 -0500 Tinkering with generalized loss func commit 4702ef3d70e4aacedae1ce39b738bc88c9756acb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 15:51:28 2021 -0500 Changed to adaptive loss function The shape value is now broken into an optimized parameter for inside experiments and between experiments commit 60e782f43ddc6f473a3b7773167bdb46aaa950e2 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 23:54:22 2021 -0500 Update loss_integral.py commit d093dda5fe90a34d197a9842dcf2a63c8e8b7433 
Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 22:45:04 2021 -0500 loss integral fitting commit 774f70d4ae07539a1153fda94e827beb2e8b1199 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 17:41:35 2021 -0500 loss function changes commit cba7b9e5638efa58970aa93506218514e170c410 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 13:30:17 2021 -0500 Experiment import update Updated experimental conditions import to work for old Tranter style experiment files commit 016ea7ca63c4af2155d1964c1d01d982dfc308bb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Aug 20 16:31:40 2021 -0500 Create fit_coeffs_pygmo.py commit b95d5a69e6c8784f2182f9d041dd55dd7aa24786 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Aug 20 16:29:08 2021 -0500 Rollback Rolling back to working CRS2 Troe opt commit 3c577e96ed87fb79a47555acf06d3ebdb65ab541 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Aug 19 14:59:26 2021 -0500 Testing Troe Opt commit 553887b648b4f9e6ff910abdbeea81ddfe44cfbb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Thu Aug 19 01:36:54 2021 -0500 Update fit_coeffs.py commit cbfaa20b587b940dc3620f599a3facb7dd1990f7 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Aug 18 16:28:28 2021 -0500 Working on implementing Augmented Lagrangian commit 520f822e5f61808140d047a8b9f5049ae9e667cc Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Aug 18 12:44:48 2021 -0500 Small changes bonmin path wasn't working with network path as string reduced min_T_range changing Troe eqn to be continuous for all Fcent values and other cases commit ab97a6def6af9a57731dcf88c5fefc88d175d1ce Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 16 22:52:25 2021 -0500 Moving ipopt and bonmin commit 3accc3b72cc0e69b5944dda58695470e87013277 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 
16 22:41:46 2021 -0500 Including bonmin and ipopt for rbfopt commit 3289e0ddd4a18f60d3c5dc7be21879e0cd206fc5 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 16 21:12:21 2021 -0500 Working on Pygmo and RBFOpt commit dc91f303cffb52aacf36acf123bfcaee4e0ddd15 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 15 21:55:11 2021 -0500 Troe updates Implemented genetic algorithms into GUI Sped GAs up through Numba Sped GAs up by only using DIRECT_L instead of CRS2. This is less accurate but much faster commit 91cbd0f243c0d31751ce37c868cc2d93d7ce9e8a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 15 01:08:16 2021 -0500 Implementing genetic algorithms commit 30ad17b43445e0b0d2f4dd6d616eb7a53487370b Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Aug 12 08:56:46 2021 -0500 Minimally working Troe commit 82ee3a198fd4d153f7877887aa710af4b7617714 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 8 23:57:46 2021 -0500 Update fit_coeffs.py Minor bug catch commit 0d89b688dad5c8289de4e803c9664dc358f6eccb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 8 23:26:30 2021 -0500 Maybe working? commit 5e69b6974732ce2ed7262436da72ee026dfb8c7f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 8 21:44:16 2021 -0500 Update fit_coeffs.py Almost ready to test, but switching to optimizing arrhenius parameters instead of lpl, hpl rates commit c3a5098c2c405c8a62a291302014116ceb595e83 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Fri Aug 6 09:54:08 2021 -0500 Tinkering with ranges and bounds commit 44d7ddb38b1706c54f68cd829866e9f83df8173c Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 3 14:55:30 2021 -0500 Troe opt changes Troe opt changes. 
fixed bug from updating dependencies commit 3735e38c33333e09b81d9950e5ccb452dfec82db Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 1 20:44:04 2021 -0500 Redoing Troe Fit Redoing fit. Need to do more work on constraints of fitting LPL, HPL, Fcent commit 64070580655a1df81ee7c6cd7f1d38aa7d0a49ff Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jul 29 10:20:16 2021 -0500 Troe Fitting - Nonfunctional Redoing Troe Fitting to be similar to PLOG -> Troe. commit d5731fc23e48ad7f74b468ae9637c68abb71b725 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jul 6 15:28:43 2021 -0500 Resample Nonuniform Data Resamples nonuniform data for uncertainty shading smoothing commit 4e113be44a96730586f92ee1b5b3f17ad3cae9f0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jul 5 18:52:25 2021 -0500 Update options_panel_widgets.py Enable/Disable wavelet levels input box accordingly commit cf67025f6228c6b8e5945401028aac9db2d6ee39 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jul 5 18:42:40 2021 -0500 Implemented unc shading over data commit 2fa08582d44fbc97fd01e2657664cd0eeddcdc1a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Jul 2 13:59:59 2021 -0500 Minor update Added error checking for 0D reactors to prevent crashing commit 5abded10b0b27beac16f61728fa375fe792e0ce8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Jun 30 20:25:17 2021 -0500 Bug Fix Fixed some convergence issues related to outlier determination commit 9e268a8ac9594a7d8dae86a089365f8ee5453618 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 29 21:29:42 2021 -0500 Working Troe Optimization Refactored Troe fitting into more legible classes. 
Enabled multiprocessing Beginning Testing commit 951efd3b251ea1d40abc1dc3aecc542620b977e8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Jun 26 23:03:43 2021 -0500 Working Basic optimization is working commit e672ca027fd5303045f8efeda6858d81207803a7 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Jun 26 00:12:41 2021 -0500 Sort of working Have nlopt working for fcent commit 2dbb375cb1a3ac4c5ec989635400fbf2fc8afa96 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 24 17:19:18 2021 -0500 Update fit_coeffs.py Added constraints, but not fitting well with nlopt commit d9302247552b820f03303a0632ad7bbb1f2ae5a4 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 24 10:33:31 2021 -0500 Semi working Troe is working, but need to implement constraints on Fcent fitting. Should be working for Plog -> Troe after implementing those constraints commit afdcdbc5955798bf68f900ffe432fe2825c8e131 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 22 15:26:13 2021 -0500 Reproducing Troe Ok commit 9d080529713b8ac53c519379b79804c0e8bb3ba6 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jun 21 22:49:49 2021 -0500 Update fit_coeffs.py commit 50265c672547ef8d611abca3654020efc08b048d Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jun 21 14:27:55 2021 -0500 Tinkering commit ee6501edf5675db221420e98b15f0aa3242e1dd2 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Jun 20 21:05:57 2021 -0500 New Plog Fit Method commit 3f5619e669e0485c5b06861fb82797bb754aaf82 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Jun 17 13:39:16 2021 -0500 Nothing works commit 568ea11ecd5be87afa3c573305fecae487c4b3af Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Jun 8 17:27:22 2021 -0500 PLOG working? 
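For context on what the Troe fitting commits above are optimizing: the standard Troe falloff parameterization blends the low- and high-pressure-limit Arrhenius rates through a broadening factor built from Fcent and fixed c, n, d constants (as documented, e.g., in the Cantera reference documentation). The sketch below uses invented parameter values for illustration and is not Frhodo's fit_coeffs implementation:

```python
import math

def arrhenius(T, A, b, Ea, R=1.987):
    # Modified Arrhenius rate; assumes Ea and R share units (cal/mol here)
    return A * T**b * math.exp(-Ea / (R * T))

def troe_rate(T, M, k0_pars, kinf_pars, A, T3, T1, T2=None):
    # k0, kinf: low- and high-pressure-limit rate constants
    k0 = arrhenius(T, *k0_pars)
    kinf = arrhenius(T, *kinf_pars)
    Pr = k0 * M / kinf                            # reduced pressure
    Fcent = (1 - A) * math.exp(-T / T3) + A * math.exp(-T / T1)
    if T2 is not None:                            # optional third Troe term
        Fcent += math.exp(-T2 / T)
    c = -0.4 - 0.67 * math.log10(Fcent)
    n = 0.75 - 1.27 * math.log10(Fcent)
    f = (math.log10(Pr) + c) / (n - 0.14 * (math.log10(Pr) + c))
    log10_F = math.log10(Fcent) / (1.0 + f * f)   # broadening factor
    return kinf * (Pr / (1.0 + Pr)) * 10.0**log10_F
```

Fitting Troe coefficients, as these commits attempt, then amounts to choosing (A, T3, T1, T2) so that `troe_rate` reproduces target k(T, P) values, for example ones evaluated from a PLOG parameterization.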
commit e9d957f823be4b4129c56ccbc11db135a0b20a85 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon May 3 22:26:22 2021 -0500 Works for PLOG but badly fitting commit 9fb5d64a3e22beec5b54e5f8b42a2bacad288b1f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon May 3 22:06:08 2021 -0500 PLOG Residual Working commit 292d68fb38e29211b22716a567025f7e7a9657c0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Apr 11 22:13:03 2021 -0500 Troe Progress commit 6867bf67882ed4a45ce652debc49910536d4122d Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Apr 7 18:22:37 2021 -0500 Update fit_coeffs.py commit 36b155d038fddf8bb32da3fefc35e8840134810a Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Apr 7 15:07:54 2021 -0500 More Progress commit e5dec8fae959c28662a911902f301c206f8e6366 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Apr 6 22:12:52 2021 -0500 Making Progress commit 0c5052034219462ab6b8bb83a5217dc3332bacdb Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Apr 6 15:01:09 2021 -0500 Tinkering commit 47bc21dcbc99288242ca3b2ffdddd541e668024a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Apr 5 18:34:20 2021 -0500 Working commit add6d9dd5681a0c390958a2b85e2b49b11d39570 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Apr 5 13:48:45 2021 -0500 Big Changes Refactored calculation-type functions Set mechanism now generates mechanism programmatically rather than from yaml text in memory. This is necessary to be able to switch reaction types commit 70cab313f211c86434cb79b710f90003b63b51a2 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 31 16:56:05 2021 -0500 set_mechanism changes Working on changing reaction types. 
Side benefit will be faster initialization during optimization commit f6822d93a11368ced34ea47543ccf43f74fb6422 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 31 13:34:46 2021 -0500 Update misc_fcns.py commit cef069ba46c9575a21dd39506aa9540a40bc3892 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 14:49:16 2021 -0500 Update shock_fcns.py commit b3e91ea58b024aa40807cc08d024f33415f10f2f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 00:00:18 2021 -0500 Update shock_fcns.py commit b8744a1ad214bb90bb619a08a289e9f4cc833e10 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Mar 28 23:43:34 2021 -0500 Update shock_fcns.py Updated shock solver to match paper commit 198eabf72084b04569be942e3d4d88521eba6c21 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 22 00:03:21 2021 -0500 Troe Falloff functional Fitting the falloff parameters is working. Need to think more about initial parameters and see if I can fit LPL and HPL at the same time (this makes PLOGS work) commit 31fb75facfc140adbeaa2fe8ddcbdfaa2dc4e48e Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Mar 11 16:29:19 2021 -0600 Bug Fixes commit 9bbdb0689855d38af559d625258782d2a1cff4e1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:24:09 2021 -0600 Update mech_fcns.py Accidentally removed sleep for mech changing. 
This must remain until incident shock reactor is rewritten commit 4a6efa520b4048d2fc22ca6733d6da97ee279de1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:16:01 2021 -0600 Update mech_optimize.py Automatically set minimum time between plots when optimizing commit c177ed6e58ff468a0bc2e75cd532ce8e8d245601 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 22:46:13 2021 -0600 Plotting Improved Set plots to only draw if they are being shown Set minimum time since last draw for optimization End result: Much faster plotting and program does not appear to hang like before commit 397457dbf477f76da4c967f2fa5b9f23a2cc7a61 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Mar 9 00:16:23 2021 -0600 Calculated Troe Derivatives commit 43a47e944335b0b9316afa63f6f208dd044a4611 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Mar 6 13:47:07 2021 -0600 Bug Fixes commit f0899df6ca3b020e073b843a3eff498f700a6a4e Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 20:27:18 2021 -0600 Fixed Branch Errors commit 687737fe016dd65faacf9d96bbc26df80bfc5121 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 18:28:40 2021 -0600 Troe Tinkering * Bug Fixing Fixing issues with optimization * Squashed commit of the following: commit de2f6d6c72afa2825a718dde98794ee93c3c61c8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 21:06:09 2021 -0600 SRI progress commit 55c92eaabb86acc4efad28c89e7f8872e6946413 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 17:07:25 2021 -0600 Update base_plot.py commit a99ae6949523491a828c091c95b7a6d43381a4c0 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 12:11:30 2021 -0600 Update base_plot.py commit edbc062de10439f61bb82980e5a21e6963811feb Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 23 17:24:00 2021 -0600 Not going well commit 738a410f596821b8539a911c963ff33bc3312750 Author: tsikes 
<50559900+tsikes@users.noreply.github.com> Date: Mon Feb 22 22:08:43 2021 -0600 New SRI fit Fitting a,b then all commit e45563872533d08db378c581b2d5bf58b87bd285 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 21:02:36 2021 -0600 SRI - not much progress commit 5889aec780ca4e1cbce370d6943874f6277d9e69 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 01:05:46 2021 -0600 More SRI fun commit fb0380db74bcc20d98712b15386c9675a55848ee Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 00:27:41 2021 -0600 Working on SRI commit 665ac25ef0fd65db042893e82585c8086014fbb3 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 18 18:07:54 2021 -0600 Working on nlopt SRI commit a2593ea4e7e2fe31f0c56c24e223c72182d59cef Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 17 17:49:32 2021 -0600 More SRI Fitting commit f901332049c809c0b2415ca9023d6e31d88c9789 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 22:36:23 2021 -0600 SRI fitting commit f3d33a6f4cde8c68c1b3292fdf79363b35e4bc79 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 17:45:57 2021 -0600 SRI fitting progress commit 3b46b21395a556b2965233961ac080aab8189ae8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 10:00:37 2021 -0600 Working on Falloff Fits commit 5dab2bff3623e5bedd115384ee0e9c7ec61ff76c Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 15 22:38:09 2021 -0600 Working on falloff Need to change mech to SRI Need to save mech as yaml for reverting back to later commit b67d2b1dd4cfe59943680522e005d8c1579060ab Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 22:11:16 2021 -0600 Working on SRI Fit commit 0aac366aada55b2f552192605e285802bcc6b805 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 15:47:26 2021 -0600 Troe kinda works Troe kinda works but it's very 
slow to fit the coefficients. I'm going to switch to SRI and hope for a faster convergence time to fitting the coefficients commit e40d5860ca5495f677aeb4c64117b52a49355508 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Feb 13 00:33:33 2021 -0600 Troe kind of working commit bbbed46a06bf6770a7cf73e8abe2880a8f731647 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 11 16:12:14 2021 -0600 Arrhenius Optimization working again Arrhenius working Work to be done with Falloff commit 308b5b37bbe290275fe6334411131090d1f8a96d Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 10 17:03:38 2021 -0600 Backend mech work Changing arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates commit f1b2a0c0b3d70546bd361982e0af7bd41bf1274f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 22:05:26 2021 -0600 More back end work on falloff/plogs commit c1c41c3b8698f6bbc1e0237765e3db84e341f465 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 17:20:14 2021 -0600 Bug fix and mech.reset work Fixed mechanism double loading from setting use_thermo_file_box programmatically mech.reset now includes Plog and Falloff properly. 
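The "Backend mech work" commit above describes storing a plain Arrhenius reaction's coefficients as an item in a list so that the same structure also fits PLOG and falloff rates, which carry several coefficient sets. A minimal sketch of that idea (all names here are hypothetical, not Frhodo's actual code):

```python
# Illustrative only: one-element list for plain Arrhenius lets the same
# traversal code handle PLOG reactions, which have one set per pressure.

def make_arrhenius(A, b, Ea):
    # one Arrhenius coefficient set: pre-exponential, temperature exponent, activation energy
    return {"A": A, "b": b, "Ea": Ea}

# Plain Arrhenius: a list holding a single coefficient set
rxn_arrhenius = {"type": "Arrhenius",
                 "coeffs": [make_arrhenius(1.0e13, 0.0, 150e6)]}

# PLOG: one coefficient set per pressure
rxn_plog = {"type": "PLOG",
            "coeffs": [make_arrhenius(1.0e10, 0.5, 100e6),
                       make_arrhenius(5.0e12, 0.2, 140e6)]}

def count_fit_parameters(rxn):
    # identical loop works for both reaction types
    return sum(len(c) for c in rxn["coeffs"])

print(count_fit_parameters(rxn_arrhenius))  # 3
print(count_fit_parameters(rxn_plog))       # 6
```

The payoff is that optimization code can iterate over `coeffs` without branching on the reaction type.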
commit 3b89e99654216a6dcb5eaac8a62d284c7fbe406f (TSikes, Tue Feb 9 16:29:06 2021 -0600)
Backend mech structures created
Arrhenius optimization functioning again

commit a9c0af4e5e5a064af03a9a08461b090c88ec7db5 (tsikes, Mon Feb 8 21:18:39 2021 -0600)
Nothing Changing
Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>

commit c54f4c1201424387e194a080a633bf05a5ab219c (Travis Sikes, Mon Jan 17 17:07:47 2022 -0600)
Update from tsikes/Frhodo (#3)

* Soln2ck Fix: fixed bug in writing thermo if no note exists (like writing from a 'gri30.cti')
* Heat Release Rate: added heat release rate as both an observable and in the sim explorer
* Example Directory changed and FirstTimeUse.docx made
* Update FirstTimeUse.docx
* removing opt mech files
* Update fit_fcn.py: replace all for the following: change calculate_residuals to calculate_objective_function; change calc_resid_output to calc_objective_function_output
* Update fit_fcn.py
* Update fit_fcn.py: changing variable names in fit_fcn and separating the Bayesian case into an if statement
* Update fit_fcn.py
* Update fit_fcn.py
* Update fit_fcn.py: making single return for verbose versus normal case
* Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn. Right now, these codes are not working; it is just the beginning.
* Update fit_fcn.py
* Create ExampleConfig.ini
* add comments to fit_fcn.py about what is fed in by "args_list"
* Update fit_fcn.py
* renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object
* Added "Force Bayesian"
* minor changes
* Update CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Adjusting observed_data and responses shapes for CheKiPEUQ
* got the weightings multiplication to work after transposing and transposing back
* typo correction
* update get_last_obs_sim_interp
* moved CheKiPEUQ_PE_object creation into time_adjust_func
* Update fit_fcn.py
* working on get_log_posterior_density and getting varying_rate_vals
* Update fit_fcn.py
* forcing 'final' after minimization to be 'residual'
* switch to negative log P for objective_function_value
* Trying to allow the Bayesian way to get past the "QQ" error by feeding residual metrics
* trying 10**neg_logP in lieu of loss_scalar
* Setting forceBayesian
* adding Bayesian_dict to make code easier to follow
* Separating Bayesian into 5 steps to simplify things for Travis
* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt…
* Squashed commit of the following:

commit 5e39ec2191bfbed08ce7d2877f2fb7f1d7da24bd (Travis Sikes, Sun Oct 16 16:28:13 2022 -0500)
Update package dependencies

commit b4854765ecc4afdd3486599b8c57565f6cd23ada (Travis Sikes, Sun Oct 16 16:20:02 2022 -0500)
Update options_panel_widgets.py
Minor bug fix. Newer Qt expects an int here

commit 2cfe699b7c8defff89ba872dc2d9c87fa4f647ff (Travis Sikes, Mon Mar 21 01:05:26 2022 -0500)
Version Bump

commit 97d59d0d67aeba280763fea7d282e306a2fa6751 (Travis Sikes, Mon Mar 21 01:04:05 2022 -0500)
Bug Fix

* Update fit_fcn.py
* Update CheKiPEUQ_integration_notes.txt
* Create CiteSoftLocal.py
* Update CiteSoft call for CheKiPEUQ
* Disabling CiteSoft exportations for now
* Update CheKiPEUQ_integration_notes.txt
* Variable name changes (#2): moved image assets and added a GUI screenshot; change calculate_residuals to calculate_objective_function and calc_resid_output to calc_objective_function_output; changing variable names in fit_fcn and separating the Bayesian case into an if statement; making single return for verbose versus normal case (Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>)
* CheKiPEUQ_integration: merging from Ashi's branch; modified GUI to include variables needed for Bayesian optimization
* Merge CheKiPEUQ local (#5): changing variable names in fit_fcn and separating the Bayesian case into an if statement; making single return for verbose versus normal case; made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn (right now, these codes are not working; it is just the beginning); Create ExampleConfig.ini; add comments to fit_fcn.py about what is fed in by "args_list"; renaming obs to obs_sim in fit_fcn
* Update fit_fcn.py
* get_varying_rate_vals_and_bnds
* get_varying_rate_vals_and_bnds
* pars_uncertainty_distribution code
* newOptimization field for creating PE_object
* adding CheKiPEUQ local
* CheKiPEUQ_local
* removing things not needed from CheKiPEUQ
* try moving CheKiPEUQ_local
* parameter_estimation class still not being found for CheKiPEUQ_local, specifically from inside CheKiPEUQ_from_Frhodo.py
* Update CheKiPEUQ_from_Frhodo.py
* Trying to use only CheKiPEUQ_local
* Merge Fix (Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com>)
* Another Merge Fix
* Forced Bayesian: removed Distribution type; how many sigma the uncertainty represents is now reachable in var
* update comment in fit_fcn.py
* manually updating fit_fcn.py from Travis's CheKiPEUQ_integration branch: GitHub was not allowing the merge properly, so doing this change manually before merging
* Update CheKiPEUQ_from_Frhodo from the CheKiPEUQ_Integration branch
* GUI Update: Bayesian tab now hidden when residuals are checked; Uncertainty/Weights table shows depending upon selection of Residual/Bayesian
* Bayesian Changes: changes to Optimization tab GUI; reverting some changes in fit_fcn now that CheKiPEUQ will be called after simulations; prior naming was more correct
* Created CheKiPEUQ Dictionary
* Minor Changes
* Update fit_fcn.py
* Update fit_fcn.py
* creating get_last_obs_sim_interp and also working on Bayesian_dict
* Update fit_fcn.py
* Update fit_fcn.py
* Adding in more Bayesian_dict arguments. (#6)
* Update README.rst: moved image assets and added a GUI screenshot
* Update README.rst
* Moved to Assets Branch
Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>
* Update fit_fcn.py
* Save coefficient x0 and bnds in optimization: also added Automatic as an option for uncertainty distribution
* Added a couple of missed values
* Update fit_fcn.py: manually making them match
* making get_consolidated_parameters_arrays
* Bayesian_dict parsing of initial guesses and bounds added
* for params, deepcopy was needed, and added print lines
* Adding new opt settings: also spent a long time fixing the scientific spinbox
* Hall of Fame Added
* Che ki peuq integration v3 (#7): making get_consolidated_parameters_arrays; Bayesian_dict parsing of initial guesses and bounds added; for params, deepcopy was needed, and added print lines (Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>)
* Fixed bounds provided to CheKiPEUQ
* CheKiPEUQ Fixes
* Fixing the logical error for lower bound: I fixed the logical error, but line 212 is now printing this: [True, True, True] [0, 2.4424906541753446e-16, -1.7976931348623155e+288] -1.7976931348623155e+288. That means that the comparison is not working correctly for the min_neg_system value. In the next commit, I'm going to use the 1E99 way of doing things.
* Changed to -1E99 and +1E99 check: somewhat surprisingly, the comparison is still failing; -1E288 > -1E99 is returning True
* fixing comparisons: there was actually an "abs" I had not noticed
* extra space deleted
* Cleaning up: boxes in uncertainty function now functional; also have further modified scientificspinbox
* cleanup & working towards inclusion of pars_bnds_exist
* rate_constants_parameters_bnds seem to be parsed and passed correctly to CheKiPEUQ
* Adding in unbounded_indices and return_unbounded_indices code
* minor syntax fixes
* Added remove_unbounded_values calls to code to truncate arrays etc.
* Moved more of the Bayesian_dict fields population to init
* Cleaning up print statements
* Fixing rate_constants_parameters_bnds_exist for multiple rate constants
* Update CheKiPEUQ_integration_notes.txt
* Che ki peuq integration v3 (#8): making get_consolidated_parameters_arrays; Bayesian_dict parsing of initial guesses and bounds added; for params, deepcopy was needed, and added print lines; fixing the logical error for lower bound; changed to -1E99 and +1E99 check; fixing comparisons; extra space deleted; cleanup & working towards inclusion of pars_bnds_exist; rate_constants_parameters_bnds parsed and passed correctly to CheKiPEUQ; adding unbounded_indices and return_unbounded_indices code; minor syntax fixes; added remove_unbounded_values calls to truncate arrays; moved more of the Bayesian_dict fields population to init; cleaning up print statements; fixing rate_constants_parameters_bnds_exist for multiple rate constants; Update CheKiPEUQ_integration_notes.txt (Co-authored-by: Travis Sikes <50559900+tsikes@users.noreply.github.com>)
* Residual Optimization Update: needed to account for exp residuals not having a zero average; changed to using medians instead of means
* More Loss Function Changes
* Oops
* Bayesian Obj Func Change: changed from -1*posterior to -1/posterior
* Bounds Error Fix
* Update: update default config parameters and fix Bayesian obj_fcn back to -1*
* Trying new fonts for log
* CheKiPEUQ: first attempt to implement variance per response for too long
* Linear scaling working now, but some 'iterations' of Frhodo are freezing during CheKiPEUQ's PE_object init. I am trying to track down the reason. It is somewhere in the responses_observed_uncertainties manipulation during CheKiPEUQ's init.
* Reduced the slowdown to the extent of not freezing: I found that the slowdown was (surprisingly) caused in the loop that tried to convert non-zero weightings into tiny finite weightings. Changing the code to use machine epsilon in one of the steps rather than minValue/1E6 made that loop faster (not completely sure why). The shape of the weightings looks odd. In the next commit I will try to print out some array shapes.
* Printing shows that the weighting array shape going into CheKiPEUQ is correct
* Fixed shape of weighting array passed to CheKiPEUQ: now need to do cleanup of print statements in next commit
* Removing excess print statements and cleaning up
* Improving heuristic slightly
* Updating version number and fixing minor error from the recent edits
* Added automatic copy rates to optimize
* Cleaning up the merge conflict that was dropped: removing some print statements and adding a .flatten() that is needed when analyzing multiple experiments
* Removing print statement from CheKiPEUQ_local
* Weights Work
* Style Changes
* Silly Math Error
* CheKiPEUQ changes: CheKiPEUQ was checking bounds after they had already been enforced, and the differing ways of checking bounds were causing CheKiPEUQ to throw -inf; CheKiPEUQ no longer checks bounds. The CheKiPEUQ objective function no longer shows the raw -1*log_posterior_density but instead the relative change from the initial guess. This should not alter convergence or how it runs, but it does make it easier to see how the value is changing.
* Minor fixes
* Update fit_fcn.py
* Nomenclature Change
* Refactoring: refactoring to clean up fit_fcn; also changing imports slightly
* Preliminary Signal Plot Changes
* Weight Function Update: further abstracting the weight function for uncertainty implementation; renamed the CheKiPEUQ interface class
* Implemented Uncertainty in Observable Data: implemented GUI elements and linked to CheKiPEUQ; uncertainty is %; need to handle uncertainties better in opt; log Bayesian is broken
* Bayesian Opt Uncertainty Working: Bayesian uncertainty now functioning for log scale
* Added Shaded Unc: shading has a gradient, will likely remove for speed
* Uncertainty Shading: I prefer this version.
It's quicker without much loss. Need to consider adding absolute uncertainties
* Added Abs Uncertainty
* Modifications to absolute uncertainty: added uncertainty type choice to saved variables; refactored OoM to convert_units
* Axes update in draggable: x and y axes are now animated objects that update with draggable
* GUI falloff/pressure dependent uncertainty
* Implementing rate coef/bnds on Plogs and falloff eqns
* Troe Optimization: Backend mech structures created; Arrhenius optimization functioning again; Bug fix and mech.reset work; Fixed mechanism double loading from setting use_thermo_file_box programmatically; mech.reset now includes Plog and Falloff properly; More back end work on falloff/plogs; Backend mech work; Changing arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates; Arrhenius Optimization working again; Work to be done with Falloff; Troe kind of working; Troe kinda works, but it's very slow to fit the coefficients, so I'm going to switch to SRI and hope for a faster convergence time in fitting the coefficients; Working on SRI Fit; Working on falloff; Need to change mech to SRI; Need to save mech as yaml for reverting back to later; Working on Falloff Fits; SRI fitting progress; SRI fitting; More SRI Fitting; Working on nlopt SRI; Working on SRI; More SRI fun; SRI - not much progress; New SRI fit (fitting a,b then all); Not going well; Update base_plot.py; Update base_plot.py; SRI progress
* Troe Optimization and Full Update: updated environment and bugs popped up; quick fixes to address those
* Squashed commit of the following:

commit 7c78008561899297477be22b2d6789d8d9be2619 (Travis Sikes, Mon Jan 17 14:51:38 2022 -0600)
Update main.py

commit 9aee5641e96c5c94c7cf3398e1c8f6c6c113c738 (Travis Sikes, Mon Nov 8 13:02:15 2021 -0600)
Version check bug fix

commit 703d2253328b5f1e59a85e04cb57552a1d0a8c53 (Merge: 0634337 7b12b8e; Travis Sikes, Mon Nov 8 12:55:21 2021 -0600)
Merge branch 'Troe_opt' of https://github.com/tsikes/Frhodo into Troe_opt

commit 063433798bdf084fd79bb37c79b8783ace8375d3 (Travis Sikes, Mon Nov 8 12:55:18 2021 -0600)
Moving Loss Partition Function TCK

commit 7b12b8eceaa5f742e3453bd75d8caa3e69b8c703 (TSikes, Mon Nov 8 11:47:03 2021 -0600)
Working on Bug Fix
New bug with RBFopt when using as executable. Working on fixing. (command window is flashing each iteration)

commit 681051d2afc94aeba90101e686bab51d199d5d32 (Travis Sikes, Thu Nov 4 16:20:59 2021 -0500)
Bug Fix
Fixed CheKiPEUQ interface issues

commit 56645923f1229a17d3f6ac50b4cdce3ebbb91580 (TSikes, Thu Nov 4 11:01:42 2021 -0500)
Bug Fix
Fixed bugs with secondary y axis in sim explorer.

commit 78215871dec25a997e24ab2f4055356f78d6a3f7 (Travis Sikes, Wed Nov 3 19:46:04 2021 -0500)
Update fit_fcn.py

commit f9d11e87d980b7a428c61a5863ae02d5e84d98d9 (Travis Sikes, Wed Nov 3 18:09:27 2021 -0500)
Optimization Bug Fixing
Fixed setting rate parametrization constant uncertainties to zero for the residual based method.

commit 7bd4d647cb0e54afa40e3703bd490162f866e189 (TSikes, Tue Nov 2 15:52:28 2021 -0500)
Minor changes
Consolidated bisymlog scaling factor
Added new % abs density gradient to options

commit cf6328b0e9038f68a3a34f51a2d628d95dcd2912 (TSikes, Mon Nov 1 16:22:59 2021 -0500)
Bug Fixing
Fixing bugs in explorer widget/base plot with limit setting and widget type not defined

commit 15b2acb593cdd15b91c80744ee109665499b5c7f (Travis Sikes, Mon Oct 4 21:56:17 2021 -0500)
Minor Update
Fixed Torr
Added bisymlog to opt type
Fixed plot error where left limit == right limit

commit 943ad8e2950b9c0267a5af2176ce846a243b8aca (TSikes, Thu Sep 23 16:36:40 2021 -0500)
Update for New Tranter Exp Format
New format does not have tOpt/PT spacing but instead gives velocity

commit 0eb48ce2e16c69cf8387aadeb94f5a0af17bcc68 (Travis Sikes, Sun Sep 19 22:55:06 2021 -0500)
Opt Update
Moved bisymlog out of multiple locations into convert_units
Changed calculate residuals log to bisymlog

commit 5268062afa75b854fe0eb5e3e25cebd0e46b6b58 (TSikes, Thu Sep 16 15:35:24 2021 -0500)
Changed Fit Function
Changed fit function to mean instead of median
Changed so that Bayesian doesn't include the penalty function at the end. Need to check

commit 3df584d3ea1d0896d96aa4bcda3dc8072f4fd1c0 (tsikes, Tue Aug 31 22:15:05 2021 -0500)
Update fit_fcn.py

commit a507f938afdf23a5210739eab9eccd3a33b02dee (tsikes, Tue Aug 31 19:30:38 2021 -0500)
Changed adaptive loss function
Adaptive loss function now optimizes inside of the rate optimization loop. It's much more efficient. It also means each individual experiment has its own loss alpha

commit bc66027fd2576fbfc3c403be1b6b3cd404d3d842 (TSikes, Tue Aug 31 16:58:18 2021 -0500)
Fixed usage of C in loss function/GUI

commit 42d82ff555bb864bb430a756691a41597473fe9a (tsikes, Sun Aug 29 18:26:13 2021 -0500)
Modified loss function
Removed prior scaling from loss function to bring it back to the publication formula

commit d0d43dc453e170d8f7e6b6d57ff63877f17aa27f (TSikes, Sun Aug 29 16:57:00 2021 -0500)
Tinkering with generalized loss func

commit 4702ef3d70e4aacedae1ce39b738bc88c9756acb (tsikes, Sun Aug 29 15:51:28 2021 -0500)
Changed to adaptive loss function
The shape value is now broken into an optimized parameter for inside experiments and between experiments

commit 60e782f43ddc6f473a3b7773167bdb46aaa950e2 (tsikes, Mon Aug 23 23:54:22 2021 -0500)
Update loss_integral.py

commit d093dda5fe90a34d197a9842dcf2a63c8e8b7433 (tsikes, Mon Aug 23 22:45:04 2021 -0500)
loss integral fitting

commit 774f70d4ae07539a1153fda94e827beb2e8b1199 (tsikes, Mon Aug 23 17:41:35 2021 -0500)
loss function changes

commit cba7b9e5638efa58970aa93506218514e170c410 (TSikes, Mon Aug 23 13:30:17 2021 -0500)
Experiment import update
Updated experimental conditions import to work for old Tranter style experiment files

commit 016ea7ca63c4af2155d1964c1d01d982dfc308bb (tsikes, Fri Aug 20 16:31:40 2021 -0500)
Create fit_coeffs_pygmo.py

commit b95d5a69e6c8784f2182f9d041dd55dd7aa24786 (tsikes, Fri Aug 20 16:29:08 2021 -0500)
Rollback
Rolling back to working CRS2 Troe opt

commit 3c577e96ed87fb79a47555acf06d3ebdb65ab541 (TSikes, Thu Aug 19 14:59:26 2021 -0500)
Testing Troe Opt

commit 553887b648b4f9e6ff910abdbeea81ddfe44cfbb (tsikes, Thu Aug 19 01:36:54 2021 -0500)
Update fit_coeffs.py

commit cbfaa20b587b940dc3620f599a3facb7dd1990f7 (tsikes, Wed Aug 18 16:28:28 2021 -0500)
Working on implementing Augmented Lagrangian

commit 520f822e5f61808140d047a8b9f5049ae9e667cc (TSikes, Wed Aug 18 12:44:48 2021 -0500)
Small changes
bonmin path wasn't working with network path as string
reduced min_T_range
changing Troe eqn to be continuous for all Fcent values and other cases

commit ab97a6def6af9a57731dcf88c5fefc88d175d1ce (tsikes, Mon Aug 16 22:52:25 2021 -0500)
Moving ipopt and bonmin

commit 3accc3b72cc0e69b5944dda58695470e87013277 (tsikes, Mon Aug 16 22:41:46 2021 -0500)
Including bonmin and ipopt for rbfopt

commit 3289e0ddd4a18f60d3c5dc7be21879e0cd206fc5 (TSikes, Mon Aug 16 21:12:21 2021 -0500)
Working on Pygmo and RBFOpt

commit dc91f303cffb52aacf36acf123bfcaee4e0ddd15 (tsikes, Sun Aug 15 21:55:11 2021 -0500)
Troe updates
Implemented genetic algorithms into GUI
Sped GAs up through Numba
Sped GAs up by only using DIRECT_L instead of CRS2. This is less accurate but much faster

commit 91cbd0f243c0d31751ce37c868cc2d93d7ce9e8a (tsikes, Sun Aug 15 01:08:16 2021 -0500)
Implementing genetic algorithms

commit 30ad17b43445e0b0d2f4dd6d616eb7a53487370b (TSikes, Thu Aug 12 08:56:46 2021 -0500)
Minimally working Troe

commit 82ee3a198fd4d153f7877887aa710af4b7617714 (tsikes, Sun Aug 8 23:57:46 2021 -0500)
Update fit_coeffs.py
Minor bug catch

commit 0d89b688dad5c8289de4e803c9664dc358f6eccb (tsikes, Sun Aug 8 23:26:30 2021 -0500)
Maybe working?

commit 5e69b6974732ce2ed7262436da72ee026dfb8c7f (tsikes, Sun Aug 8 21:44:16 2021 -0500)
Update fit_coeffs.py
Almost ready to test, but switching to optimizing Arrhenius parameters instead of LPL, HPL rates

commit c3a5098c2c405c8a62a291302014116ceb595e83 (TSikes, Fri Aug 6 09:54:08 2021 -0500)
Tinkering with ranges and bounds

commit 44d7ddb38b1706c54f68cd829866e9f83df8173c (TSikes, Tue Aug 3 14:55:30 2021 -0500)
Troe opt changes
Troe opt changes. Fixed bug from updating dependencies

commit 3735e38c33333e09b81d9950e5ccb452dfec82db (tsikes, Sun Aug 1 20:44:04 2021 -0500)
Redoing Troe Fit
Redoing fit. Need to do more work on constraints of fitting LPL, HPL, Fcent

commit 64070580655a1df81ee7c6cd7f1d38aa7d0a49ff (TSikes, Thu Jul 29 10:20:16 2021 -0500)
Troe Fitting - Nonfunctional
Redoing Troe fitting to be similar to PLOG -> Troe.

commit d5731fc23e48ad7f74b468ae9637c68abb71b725 (TSikes, Tue Jul 6 15:28:43 2021 -0500)
Resample Nonuniform Data
Resamples nonuniform data for uncertainty shading smoothing

commit 4e113be44a96730586f92ee1b5b3f17ad3cae9f0 (tsikes, Mon Jul 5 18:52:25 2021 -0500)
Update options_panel_widgets.py
Enable/disable wavelet levels input box accordingly

commit cf67025f6228c6b8e5945401028aac9db2d6ee39 (tsikes, Mon Jul 5 18:42:40 2021 -0500)
Implemented unc shading over data

commit 2fa08582d44fbc97fd01e2657664cd0eeddcdc1a (tsikes, Fri Jul 2 13:59:59 2021 -0500)
Minor update
Added error checking for 0D reactors to prevent crashing

commit 5abded10b0b27beac16f61728fa375fe792e0ce8 (tsikes, Wed Jun 30 20:25:17 2021 -0500)
Bug Fix
Fixed some convergence issues related to outlier determination

commit 9e268a8ac9594a7d8dae86a089365f8ee5453618 (tsikes, Tue Jun 29 21:29:42 2021 -0500)
Working Troe Optimization
Refactored Troe fitting into more legible classes.
Enabled multiprocessing
Beginning testing

commit 951efd3b251ea1d40abc1dc3aecc542620b977e8 (tsikes, Sat Jun 26 23:03:43 2021 -0500)
Working
Basic optimization is working

commit e672ca027fd5303045f8efeda6858d81207803a7 (tsikes, Sat Jun 26 00:12:41 2021 -0500)
Sort of working
Have nlopt working for Fcent

commit 2dbb375cb1a3ac4c5ec989635400fbf2fc8afa96 (TSikes, Thu Jun 24 17:19:18 2021 -0500)
Update fit_coeffs.py
Added constraints, but not fitting well with nlopt

commit d9302247552b820f03303a0632ad7bbb1f2ae5a4 (TSikes, Thu Jun 24 10:33:31 2021 -0500)
Semi working
Troe is working, but need to implement constraints on Fcent fitting. Should be working for PLOG -> Troe after implementing those constraints

commit afdcdbc5955798bf68f900ffe432fe2825c8e131 (TSikes, Tue Jun 22 15:26:13 2021 -0500)
Reproducing Troe Ok

commit 9d080529713b8ac53c519379b79804c0e8bb3ba6 (tsikes, Mon Jun 21 22:49:49 2021 -0500)
Update fit_coeffs.py

commit 50265c672547ef8d611abca3654020efc08b048d (TSikes, Mon Jun 21 14:27:55 2021 -0500)
Tinkering

commit ee6501edf5675db221420e98b15f0aa3242e1dd2 (tsikes, Sun Jun 20 21:05:57 2021 -0500)
New Plog Fit Method

commit 3f5619e669e0485c5b06861fb82797bb754aaf82 (TSikes, Thu Jun 17 13:39:16 2021 -0500)
Nothing works

commit 568ea11ecd5be87afa3c573305fecae487c4b3af (TSikes, Tue Jun 8 17:27:22 2021 -0500)
PLOG working?

commit e9d957f823be4b4129c56ccbc11db135a0b20a85 (tsikes, Mon May 3 22:26:22 2021 -0500)
Works for PLOG but badly fitting

commit 9fb5d64a3e22beec5b54e5f8b42a2bacad288b1f (tsikes, Mon May 3 22:06:08 2021 -0500)
PLOG Residual Working

commit 292d68fb38e29211b22716a567025f7e7a9657c0 (tsikes, Sun Apr 11 22:13:03 2021 -0500)
Troe Progress

commit 6867bf67882ed4a45ce652debc49910536d4122d (tsikes, Wed Apr 7 18:22:37 2021 -0500)
Update fit_coeffs.py

commit 36b155d038fddf8bb32da3fefc35e8840134810a (TSikes, Wed Apr 7 15:07:54 2021 -0500)
More Progress

commit e5dec8fae959c28662a911902f301c206f8e6366 (tsikes, Tue Apr 6 22:12:52 2021 -0500)
Making Progress

commit 0c5052034219462ab6b8bb83a5217dc3332bacdb (TSikes, Tue Apr 6 15:01:09 2021 -0500)
Tinkering

commit 47bc21dcbc99288242ca3b2ffdddd541e668024a (tsikes, Mon Apr 5 18:34:20 2021 -0500)
Working

commit add6d9dd5681a0c390958a2b85e2b49b11d39570 (tsikes, Mon Apr 5 13:48:45 2021 -0500)
Big Changes
Refactored calculation-type functions
Set mechanism now generates the mechanism programmatically rather than from yaml text in memory. This is necessary to be able to switch reaction types

commit 70cab313f211c86434cb79b710f90003b63b51a2 (TSikes, Wed Mar 31 16:56:05 2021 -0500)
set_mechanism changes
Working on changing reaction types.
Side benefit will be faster initialization during optimization commit f6822d93a11368ced34ea47543ccf43f74fb6422 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 31 13:34:46 2021 -0500 Update misc_fcns.py commit cef069ba46c9575a21dd39506aa9540a40bc3892 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 14:49:16 2021 -0500 Update shock_fcns.py commit b3e91ea58b024aa40807cc08d024f33415f10f2f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 29 00:00:18 2021 -0500 Update shock_fcns.py commit b8744a1ad214bb90bb619a08a289e9f4cc833e10 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Mar 28 23:43:34 2021 -0500 Update shock_fcns.py Updated shock solver to match paper commit 198eabf72084b04569be942e3d4d88521eba6c21 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Mar 22 00:03:21 2021 -0500 Troe Falloff functional Fitting the falloff parameters is working. Need to think more about initial parameters and see if I can fit LPL and HPL at the same time (this makes PLOGS work) commit 31fb75facfc140adbeaa2fe8ddcbdfaa2dc4e48e Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Mar 11 16:29:19 2021 -0600 Bug Fixes commit 9bbdb0689855d38af559d625258782d2a1cff4e1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:24:09 2021 -0600 Update mech_fcns.py Accidentally removed sleep for mech changing. 
This must remain until incident shock reactor is rewritten commit 4a6efa520b4048d2fc22ca6733d6da97ee279de1 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 23:16:01 2021 -0600 Update mech_optimize.py Automatically set minimum time between plots when optimizing commit c177ed6e58ff468a0bc2e75cd532ce8e8d245601 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Mar 10 22:46:13 2021 -0600 Plotting Improved Set plots to only draw if they are being shown Set minimum time since last draw for optimization End result: Much faster plotting and program does not appear to hang like before commit 397457dbf477f76da4c967f2fa5b9f23a2cc7a61 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Mar 9 00:16:23 2021 -0600 Calculated Troe Derivatives commit 43a47e944335b0b9316afa63f6f208dd044a4611 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Mar 6 13:47:07 2021 -0600 Bug Fixes commit f0899df6ca3b020e073b843a3eff498f700a6a4e Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 20:27:18 2021 -0600 Fixed Branch Errors commit 687737fe016dd65faacf9d96bbc26df80bfc5121 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 28 18:28:40 2021 -0600 Troe Tinkering * Bug Fixing Fixing issues with optimization * Squashed commit of the following: commit 7c78008561899297477be22b2d6789d8d9be2619 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jan 17 14:51:38 2022 -0600 Update main.py commit 9aee5641e96c5c94c7cf3398e1c8f6c6c113c738 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 13:02:15 2021 -0600 Version check bug fix commit 703d2253328b5f1e59a85e04cb57552a1d0a8c53 Merge: 0634337 7b12b8e Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 12:55:21 2021 -0600 Merge branch 'Troe_opt' of https://github.com/tsikes/Frhodo into Troe_opt commit 063433798bdf084fd79bb37c79b8783ace8375d3 Author: Travis Sikes 
<50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 12:55:18 2021 -0600 Moving Loss Partition Function TCK commit 7b12b8eceaa5f742e3453bd75d8caa3e69b8c703 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 8 11:47:03 2021 -0600 Working on Bug Fix New bug with RBFopt when using as executable. Working on fixing. (command window is flashing each iteration) commit 681051d2afc94aeba90101e686bab51d199d5d32 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Thu Nov 4 16:20:59 2021 -0500 Bug Fix Fixed CheKiPEUQ interface issues commit 56645923f1229a17d3f6ac50b4cdce3ebbb91580 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Nov 4 11:01:42 2021 -0500 Bug Fix Fixed bugs with secondary y axis in sim explorer. commit 78215871dec25a997e24ab2f4055356f78d6a3f7 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Wed Nov 3 19:46:04 2021 -0500 Update fit_fcn.py commit f9d11e87d980b7a428c61a5863ae02d5e84d98d9 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Wed Nov 3 18:09:27 2021 -0500 Optimization Bug Fixing Fixed setting rate parametrization constant uncertainties to zero for residual based method. 
commit 7bd4d647cb0e54afa40e3703bd490162f866e189 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Nov 2 15:52:28 2021 -0500 Minor changes Consolidated bisymlog scaling factor Added new % abs density gradient to options commit cf6328b0e9038f68a3a34f51a2d628d95dcd2912 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Nov 1 16:22:59 2021 -0500 Bug Fixing Fixing bugs in explorer widget/base plot with limit setting and widget type not defined commit 15b2acb593cdd15b91c80744ee109665499b5c7f Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Oct 4 21:56:17 2021 -0500 Minor Update Fixed Torr added bisymlog to opt type Fixed plot error where left limit == right limit commit 943ad8e2950b9c0267a5af2176ce846a243b8aca Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Sep 23 16:36:40 2021 -0500 Update for New Tranter Exp Format New format does not have tOpt/PT spacing but instead gives velocity commit 0eb48ce2e16c69cf8387aadeb94f5a0af17bcc68 Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Sun Sep 19 22:55:06 2021 -0500 Opt Update Moved bisymlog out of multiple locations into convert_units Changed calculate residuals log to bisymlog commit 5268062afa75b854fe0eb5e3e25cebd0e46b6b58 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Sep 16 15:35:24 2021 -0500 Changed Fit Function Changed fit function to mean instead of median Changed so that Bayesian doesn't include penalty function at end. Need to check commit 3df584d3ea1d0896d96aa4bcda3dc8072f4fd1c0 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 22:15:05 2021 -0500 Update fit_fcn.py commit a507f938afdf23a5210739eab9eccd3a33b02dee Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 19:30:38 2021 -0500 Changed adaptive loss function Adaptive loss function now optimizes inside of rate optimization loop. It's much more efficient. 
It also means each individual experiment has its own loss alpha commit bc66027fd2576fbfc3c403be1b6b3cd404d3d842 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Aug 31 16:58:18 2021 -0500 Fixed usage of C in loss function/GUI commit 42d82ff555bb864bb430a756691a41597473fe9a Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 18:26:13 2021 -0500 Modified loss function Removed prior scaling from loss function to bring it back to publication formula commit d0d43dc453e170d8f7e6b6d57ff63877f17aa27f Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 16:57:00 2021 -0500 Tinkering with generalized loss func commit 4702ef3d70e4aacedae1ce39b738bc88c9756acb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Aug 29 15:51:28 2021 -0500 Changed to adaptive loss function The shape value is now broken into an optimized parameter for inside experiments and between experiments commit 60e782f43ddc6f473a3b7773167bdb46aaa950e2 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 23:54:22 2021 -0500 Update loss_integral.py commit d093dda5fe90a34d197a9842dcf2a63c8e8b7433 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 22:45:04 2021 -0500 loss integral fitting commit 774f70d4ae07539a1153fda94e827beb2e8b1199 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 17:41:35 2021 -0500 loss function changes commit cba7b9e5638efa58970aa93506218514e170c410 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 23 13:30:17 2021 -0500 Experiment import update Updated experimental conditions import to work for old Tranter style experiment files commit 016ea7ca63c4af2155d1964c1d01d982dfc308bb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Aug 20 16:31:40 2021 -0500 Create fit_coeffs_pygmo.py commit b95d5a69e6c8784f2182f9d041dd55dd7aa24786 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Fri Aug 20 
16:29:08 2021 -0500 Rollback Rolling back to working CRS2 Troe opt commit 3c577e96ed87fb79a47555acf06d3ebdb65ab541 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Aug 19 14:59:26 2021 -0500 Testing Troe Opt commit 553887b648b4f9e6ff910abdbeea81ddfe44cfbb Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Thu Aug 19 01:36:54 2021 -0500 Update fit_coeffs.py commit cbfaa20b587b940dc3620f599a3facb7dd1990f7 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Aug 18 16:28:28 2021 -0500 Working on implementing Augmented Lagrangian commit 520f822e5f61808140d047a8b9f5049ae9e667cc Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Aug 18 12:44:48 2021 -0500 Small changes bonmin path wasn't working with network path as string reduced min_T_range changing Troe eqn to be continuous for all Fcent values and other cases commit ab97a6def6af9a57731dcf88c5fefc88d175d1ce Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Aug 16 22:52:25 2021 -0500 Moving ipopt and bonmin commit de2f6d6c72afa2825a718dde98794ee93c3c61c8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 21:06:09 2021 -0600 SRI progress commit 55c92eaabb86acc4efad28c89e7f8872e6946413 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 17:07:25 2021 -0600 Update base_plot.py commit a99ae6949523491a828c091c95b7a6d43381a4c0 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 24 12:11:30 2021 -0600 Update base_plot.py commit edbc062de10439f61bb82980e5a21e6963811feb Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 23 17:24:00 2021 -0600 Not going well commit 738a410f596821b8539a911c963ff33bc3312750 Author: tsikes 
<50559900+tsikes@users.noreply.github.com> Date: Mon Feb 22 22:08:43 2021 -0600 New SRI fit Fitting a,b then all commit e45563872533d08db378c581b2d5bf58b87bd285 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 21:02:36 2021 -0600 SRI - not much progress commit 5889aec780ca4e1cbce370d6943874f6277d9e69 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 01:05:46 2021 -0600 More SRI fun commit fb0380db74bcc20d98712b15386c9675a55848ee Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 21 00:27:41 2021 -0600 Working on SRI commit 665ac25ef0fd65db042893e82585c8086014fbb3 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 18 18:07:54 2021 -0600 Working on nlopt SRI commit a2593ea4e7e2fe31f0c56c24e223c72182d59cef Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 17 17:49:32 2021 -0600 More SRI Fitting commit f901332049c809c0b2415ca9023d6e31d88c9789 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 22:36:23 2021 -0600 SRI fitting commit f3d33a6f4cde8c68c1b3292fdf79363b35e4bc79 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 17:45:57 2021 -0600 SRI fitting progress commit 3b46b21395a556b2965233961ac080aab8189ae8 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 16 10:00:37 2021 -0600 Working on Falloff Fits commit 5dab2bff3623e5bedd115384ee0e9c7ec61ff76c Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 15 22:38:09 2021 -0600 Working on falloff Need to change mech to SRI Need to save mech as yaml for reverting back to later commit b67d2b1dd4cfe59943680522e005d8c1579060ab Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 22:11:16 2021 -0600 Working on SRI Fit commit 0aac366aada55b2f552192605e285802bcc6b805 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sun Feb 14 15:47:26 2021 -0600 Troe kinda works Troe kinda works but it's very 
slow to fit the coefficients. I'm going to switch to SRI and hope for a faster convergence time to fitting the coefficients commit e40d5860ca5495f677aeb4c64117b52a49355508 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Sat Feb 13 00:33:33 2021 -0600 Troe kind of working commit bbbed46a06bf6770a7cf73e8abe2880a8f731647 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Thu Feb 11 16:12:14 2021 -0600 Arrhenius Optimization working again Arrhenius working Work to be done with Falloff commit 308b5b37bbe290275fe6334411131090d1f8a96d Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Wed Feb 10 17:03:38 2021 -0600 Backend mech work Changing arrhenius coeffs/coef_bnds to be an item in a list to better match plog and falloff rates commit f1b2a0c0b3d70546bd361982e0af7bd41bf1274f Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 22:05:26 2021 -0600 More back end work on falloff/plogs commit c1c41c3b8698f6bbc1e0237765e3db84e341f465 Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 17:20:14 2021 -0600 Bug fix and mech.reset work Fixed mechanism double loading from setting use_thermo_file_box programmatically mech.reset now includes Plog and Falloff properly. 
commit 3b89e99654216a6dcb5eaac8a62d284c7fbe406f Author: TSikes <50559900+tsikes@users.noreply.github.com> Date: Tue Feb 9 16:29:06 2021 -0600 Backend mech structures created Arrhenius optimization functioning again commit a9c0af4e5e5a064af03a9a08461b090c88ec7db5 Author: tsikes <50559900+tsikes@users.noreply.github.com> Date: Mon Feb 8 21:18:39 2021 -0600 Nothing Changing Co-authored-by: Aditya Savara <39929571+AdityaSavara@users.noreply.github.com> commit c54f4c1201424387e194a080a633bf05a5ab219c Author: Travis Sikes <50559900+tsikes@users.noreply.github.com> Date: Mon Jan 17 17:07:47 2022 -0600 Update from tsikes/Frhodo (#3) * Soln2ck Fix Fixed bug in writing thermo if no note exists (like writing from a 'gri30.cti') * Heat Release Rate Added heat release rate as both an observable and in the sim explorer * Example Directory Changed and FirstTimeUse.docx made. * Update FirstTimeUse.docx * removing opt mech files * Update fit_fcn.py Replace all for following: change calculate_residuals to calculate_objective_function change calc_resid_output to calc_objective_function_output * Update fit_fcn.py * Update fit_fcn.py changing variable names in fit_fcn and separating the bayesian case into an if statement. * Update fit_fcn.py * Update fit_fcn.py * Update fit_fcn.py Making single return for verbose versus normal case. * Made CheKiPEUQ_from_Frhodo helper module and also put code into fit_fcn Right now, these codes are not working. It is just the beginning. * Update fit_fcn.py * Create ExampleConfig.ini * add comments to fit_fcn.py about what is fed in by "args_list" * Update fit_fcn.py * renaming obs to obs_sim in fit_fcn * Update fit_fcn.py * get_varying_rate_vals_and_bnds * get_varying_rate_vals_and_bnds * pars_uncertainty_distribution code * newOptimization field for creating PE_object. 
* Added "Force Bayesian" * minor changes * Update CheKiPEUQ_from_Frhodo.py * Update CheKiPEUQ_from_Frhodo.py * Adjusting observed_data and responses shapes for CheKiPEUQ * got the weightings multiplication to work after transposing and transposing back * typo correction * update get_last_obs_sim_interp * moved CheKiPEUQ_PE_object creation into time_adjust_func * Update fit_fcn.py * working on get_log_posterior_density and getting varying_rate_vals * Update fit_fcn.py * forcing 'final' after minimization to be 'residual' * switch to negative log P for objective_function_value * Trying to allow Bayesian way to get past the "QQ" error by feeding residual metrics. * trying 10**neg_logP in lieu of loss_scalar. * Setting forceBayesian * adding Bayesian_dict to make code easier to follow * Separating Bayesian into 5 steps to simplify things for Travis * Update fit_fcn.py * Update CheKiPEUQ_integration_notes.txt… --------- Co-authored-by: AdityaSavara <39929571+AdityaSavara@users.noreply.github.com> --- .vscode/settings.json | 7 + frhodo_environment.yml | 24 - frhodo_environment_pinned.yml | 23 + frhodo_environment_pinned_all.yml | 177 ++ frhodo_environment_unpinned.yml | 24 + src/calculate/convert_units.py | 55 +- src/calculate/mech_fcns.py | 247 ++- src/calculate/optimize/adaptive_loss.py | 267 +++ src/calculate/optimize/adaptive_loss_tck.py | 180 ++ src/calculate/optimize/fit_coeffs.py | 38 +- src/calculate/optimize/fit_coeffs_pygmo.py | 17 +- src/calculate/optimize/fit_fcn.py | 664 ++++-- src/calculate/optimize/mech_optimize.py | 31 +- src/calculate/optimize/misc_fcns.py | 139 +- src/calculate/optimize/optimize_worker.py | 6 +- src/calculate/reactors.py | 726 +++--- src/calculate/shock_fcns.py | 4 +- src/ck2yaml.py | 2219 ------------------- src/main.py | 386 ++-- src/mech_widget.py | 56 +- src/misc_widget.py | 302 ++- src/options_panel_widgets.py | 1647 ++++++++------ src/plot/base_plot.py | 8 +- src/plot/signal_plot.py | 4 +- src/save_output.py | 5 +- src/settings.py 
| 4 +- src/soln2ck.py | 654 ------ 27 files changed, 3309 insertions(+), 4605 deletions(-) create mode 100644 .vscode/settings.json delete mode 100644 frhodo_environment.yml create mode 100644 frhodo_environment_pinned.yml create mode 100644 frhodo_environment_pinned_all.yml create mode 100644 frhodo_environment_unpinned.yml create mode 100644 src/calculate/optimize/adaptive_loss.py create mode 100644 src/calculate/optimize/adaptive_loss_tck.py delete mode 100644 src/ck2yaml.py delete mode 100644 src/soln2ck.py diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 0000000..b4075aa --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,7 @@ +{ + "jupyter.jupyterServerType": "local", + "[python]": { + "editor.defaultFormatter": "ms-python.black-formatter" + }, + "python.formatting.provider": "none" +} \ No newline at end of file diff --git a/frhodo_environment.yml b/frhodo_environment.yml deleted file mode 100644 index 7462719..0000000 --- a/frhodo_environment.yml +++ /dev/null @@ -1,24 +0,0 @@ -name: frhodo -channels: - - numba - - cantera - - conda-forge - - defaults -dependencies: - - cantera=2.5.1 - - ipopt=3.14.9 - - matplotlib=3.5.1 - - nlopt=2.7.1 - - numba=0.56.3 - - numpy=1.23.3 - - pip - - pygmo=2.18.0 - - pyqt=5.12.3 - - qt=5.12.9 - - qtpy=1.11.2 - - requests=2.28.1 - - scipy=1.9.1 - - tabulate=0.9.0 - - pip: - - dtcwt==0.12.0 - - rbfopt==4.2.4 diff --git a/frhodo_environment_pinned.yml b/frhodo_environment_pinned.yml new file mode 100644 index 0000000..1ff341a --- /dev/null +++ b/frhodo_environment_pinned.yml @@ -0,0 +1,23 @@ +name: frhodo +channels: + - numba + - cantera + - conda-forge + - defaults +dependencies: + - cantera=3.0.0 + - ipopt=3.14.12 + - matplotlib=3.7.2 + - nlopt=2.7.1 + - numba=0.57.1 + - numpy=1.24.4 + - pygmo=2.19.5 + - pyqt=5.15.9 + - qt=5.15.8 + - qtpy=2.4.0 + - requests=2.31.0 + - scipy=1.11.2 + - tabulate=0.9.0 + - pip: + - dtcwt==0.12.0 + - rbfopt==4.2.6 diff --git a/frhodo_environment_pinned_all.yml 
b/frhodo_environment_pinned_all.yml new file mode 100644 index 0000000..137c3fc --- /dev/null +++ b/frhodo_environment_pinned_all.yml @@ -0,0 +1,177 @@ +name: frhodo +channels: + - numba + - cantera + - conda-forge + - defaults +dependencies: + - asttokens=2.4.0=pyhd8ed1ab_0 + - backcall=0.2.0=pyh9f0ad1d_0 + - backports=1.0=pyhd8ed1ab_3 + - backports.functools_lru_cache=1.6.5=pyhd8ed1ab_0 + - boost-cpp=1.78.0=h9f4b32c_4 + - brotli=1.1.0=hcfcfb64_0 + - brotli-bin=1.1.0=hcfcfb64_0 + - brotli-python=1.1.0=py311h12c1d0e_0 + - bzip2=1.0.8=h8ffe710_4 + - ca-certificates=2023.7.22=h56e8100_0 + - cantera=3.0.0=py311h859585b_1 + - certifi=2023.7.22=pyhd8ed1ab_0 + - charset-normalizer=3.2.0=pyhd8ed1ab_0 + - cloudpickle=2.2.1=pyhd8ed1ab_0 + - colorama=0.4.6=pyhd8ed1ab_0 + - comm=0.1.4=pyhd8ed1ab_0 + - contourpy=1.1.1=py311h005e61a_0 + - cycler=0.11.0=pyhd8ed1ab_0 + - debugpy=1.8.0=py311h12c1d0e_0 + - decorator=5.1.1=pyhd8ed1ab_0 + - entrypoints=0.4=pyhd8ed1ab_0 + - exceptiongroup=1.1.3=pyhd8ed1ab_0 + - executing=1.2.0=pyhd8ed1ab_0 + - fonttools=4.42.1=py311ha68e1ae_0 + - freetype=2.12.1=hdaf720e_2 + - gettext=0.21.1=h5728263_0 + - glib=2.78.0=h12be248_0 + - glib-tools=2.78.0=h12be248_0 + - gst-plugins-base=1.22.5=h001b923_1 + - gstreamer=1.22.5=hb4038d2_1 + - hdf5=1.12.1=nompi_h57737ce_104 + - icu=72.1=h63175ca_0 + - idna=3.4=pyhd8ed1ab_0 + - importlib-metadata=6.8.0=pyha770c72_0 + - importlib_metadata=6.8.0=hd8ed1ab_0 + - intel-openmp=2023.2.0=h57928b3_49496 + - ipopt=3.14.12=ha9547d1_1 + - ipykernel=6.25.2=pyh60829e3_0 + - ipyparallel=8.6.1=pyhd8ed1ab_0 + - ipython=8.15.0=pyh5737063_0 + - jedi=0.19.0=pyhd8ed1ab_0 + - jupyter_client=8.3.1=pyhd8ed1ab_0 + - jupyter_core=5.3.1=py311h1ea47a8_0 + - kiwisolver=1.4.5=py311h005e61a_0 + - krb5=1.20.1=heb0366b_0 + - lcms2=2.15=he9d350c_2 + - lerc=4.0.0=h63175ca_0 + - libblas=3.9.0=18_win64_mkl + - libbrotlicommon=1.1.0=hcfcfb64_0 + - libbrotlidec=1.1.0=hcfcfb64_0 + - libbrotlienc=1.1.0=hcfcfb64_0 + - libcantera=3.0.0=h82bb817_1 + - 
libcblas=3.9.0=18_win64_mkl + - libclang=16.0.3=default_h8b4101f_1 + - libclang13=16.0.3=default_h45d3cf4_1 + - libcurl=8.1.2=h68f0423_0 + - libdeflate=1.19=hcfcfb64_0 + - libexpat=2.5.0=h63175ca_1 + - libffi=3.4.2=h8ffe710_5 + - libflang=5.0.0=h6538335_20180525 + - libglib=2.78.0=he8f3873_0 + - libhwloc=2.9.1=h51c2c0f_0 + - libiconv=1.17=h8ffe710_0 + - libjpeg-turbo=2.1.5.1=hcfcfb64_1 + - liblapack=3.9.0=18_win64_mkl + - libogg=1.3.4=h8ffe710_1 + - libpng=1.6.39=h19919ed_0 + - libsodium=1.0.18=h8d14728_1 + - libsqlite=3.43.0=hcfcfb64_0 + - libssh2=1.11.0=h7dfc565_0 + - libtiff=4.6.0=h4554b19_1 + - libvorbis=1.3.7=h0e60522_0 + - libwebp=1.3.2=hcfcfb64_0 + - libwebp-base=1.3.2=hcfcfb64_0 + - libxcb=1.15=hcd874cb_0 + - libxml2=2.10.4=hc3477c8_0 + - libzlib=1.2.13=hcfcfb64_5 + - llvm-meta=5.0.0=0 + - llvmlite=0.40.1=py311_0 + - m2w64-gcc-libgfortran=5.3.0=6 + - m2w64-gcc-libs=5.3.0=7 + - m2w64-gcc-libs-core=5.3.0=7 + - m2w64-gmp=6.1.0=2 + - m2w64-libwinpthread-git=5.0.0.4634.697f757=2 + - matplotlib=3.7.2=py311h1ea47a8_0 + - matplotlib-base=3.7.2=py311h6e989c2_0 + - matplotlib-inline=0.1.6=pyhd8ed1ab_0 + - metis=5.1.0=h63175ca_1007 + - mkl=2022.1.0=h6a75c08_874 + - msys2-conda-epoch=20160418=1 + - mumps-seq=5.2.1=hb3f9cae_11 + - munkres=1.1.4=pyh9f0ad1d_0 + - nest-asyncio=1.5.6=pyhd8ed1ab_0 + - networkx=3.1=pyhd8ed1ab_0 + - nlopt=2.7.1=py311h6e294a0_3 + - numba=0.57.1=np1.22py3.11h81563b4_g04e81073b_0 + - numpy=1.24.4=py311h0b4df5a_0 + - openjpeg=2.5.0=h3d672ee_3 + - openmp=5.0.0=vc14_1 + - openssl=3.1.2=hcfcfb64_0 + - packaging=23.1=pyhd8ed1ab_0 + - pagmo=2.19.0=h17fe9aa_2 + - parso=0.8.3=pyhd8ed1ab_0 + - pcre2=10.40=h17e33f8_0 + - pickleshare=0.7.5=py_1003 + - pillow=10.0.1=py311hd926f49_0 + - pip=23.2.1=pyhd8ed1ab_0 + - platformdirs=3.10.0=pyhd8ed1ab_0 + - ply=3.11=py_1 + - prompt-toolkit=3.0.39=pyha770c72_0 + - prompt_toolkit=3.0.39=hd8ed1ab_0 + - psutil=5.9.5=py311ha68e1ae_0 + - pthread-stubs=0.4=hcd874cb_1001 + - pthreads-win32=2.9.1=hfa6e2cd_3 + - 
pure_eval=0.2.2=pyhd8ed1ab_0 + - pybind11-abi=4=hd8ed1ab_3 + - pygments=2.16.1=pyhd8ed1ab_0 + - pygmo=2.19.5=py311h153afbf_1 + - pyparsing=3.0.9=pyhd8ed1ab_0 + - pyqt=5.15.9=py311h125bc19_4 + - pyqt5-sip=12.12.2=py311h12c1d0e_4 + - pysocks=1.7.1=pyh0701188_6 + - python=3.11.5=h2628c8c_0_cpython + - python-dateutil=2.8.2=pyhd8ed1ab_0 + - python_abi=3.11=3_cp311 + - pywin32=304=py311h12c1d0e_2 + - pyzmq=25.1.1=py311h7b3f143_0 + - qt=5.15.8=h91493d7_0 + - qt-main=5.15.8=h2c8576c_11 + - qt-webengine=5.15.8=h5b1ea0b_0 + - qtpy=2.4.0=pyhd8ed1ab_0 + - requests=2.31.0=pyhd8ed1ab_0 + - ruamel.yaml=0.17.32=py311ha68e1ae_0 + - ruamel.yaml.clib=0.2.7=py311ha68e1ae_1 + - scipy=1.11.2=py311h37ff6ca_1 + - setuptools=68.2.2=pyhd8ed1ab_0 + - sip=6.7.11=py311h12c1d0e_0 + - six=1.16.0=pyh6c4a22f_0 + - stack_data=0.6.2=pyhd8ed1ab_0 + - tabulate=0.9.0=pyhd8ed1ab_1 + - tbb=2021.9.0=h91493d7_0 + - tk=8.6.12=h8ffe710_0 + - toml=0.10.2=pyhd8ed1ab_0 + - tomli=2.0.1=pyhd8ed1ab_0 + - tornado=6.3.3=py311ha68e1ae_0 + - tqdm=4.66.1=pyhd8ed1ab_0 + - traitlets=5.10.0=pyhd8ed1ab_0 + - typing-extensions=4.7.1=hd8ed1ab_0 + - typing_extensions=4.7.1=pyha770c72_0 + - tzdata=2023c=h71feb2d_0 + - ucrt=10.0.22621.0=h57928b3_0 + - urllib3=2.0.4=pyhd8ed1ab_0 + - vc=14.3=h64f974e_17 + - vc14_runtime=14.36.32532=hdcecf7f_17 + - vs2015_runtime=14.36.32532=h05e6639_17 + - wcwidth=0.2.6=pyhd8ed1ab_0 + - wheel=0.41.2=pyhd8ed1ab_0 + - win_inet_pton=1.1.0=pyhd8ed1ab_6 + - xorg-libxau=1.0.11=hcd874cb_0 + - xorg-libxdmcp=1.1.3=hcd874cb_0 + - xz=5.2.6=h8d14728_0 + - zeromq=4.3.4=h0e60522_1 + - zipp=3.16.2=pyhd8ed1ab_0 + - zlib=1.2.13=hcfcfb64_5 + - zstd=1.5.5=h12be248_0 + - pip: + - dtcwt==0.12.0 + - pyomo==6.6.2 + - rbfopt==4.2.6 +prefix: C:\Users\Travis\miniconda3\envs\frhodo diff --git a/frhodo_environment_unpinned.yml b/frhodo_environment_unpinned.yml new file mode 100644 index 0000000..0dc174a --- /dev/null +++ b/frhodo_environment_unpinned.yml @@ -0,0 +1,24 @@ +name: frhodo +channels: + - cantera + - numba + - 
conda-forge + - defaults +dependencies: + - cantera + - ipopt + - matplotlib + - nlopt + - numba=0.57 + - numpy + - pip + - pygmo + - pyqt + - qt + - qtpy + - requests + - scipy + - tabulate + - pip: + - dtcwt + - rbfopt diff --git a/src/calculate/convert_units.py b/src/calculate/convert_units.py index 3c5a2b0..9a558b1 100644 --- a/src/calculate/convert_units.py +++ b/src/calculate/convert_units.py @@ -2,20 +2,71 @@ # and licensed under BSD-3-Clause. See License.txt in the top-level # directory for license and copyright information. +import sys import numpy as np import cantera as ct -import sys +import numba + conv2ct = {'torr': 101325/760, 'kPa': 1E3, 'atm': 101325, 'bar': 100000, 'psi': 4.4482216152605/0.00064516, 'cm/s': 1E-2, 'mm/μs': 1000, 'ft/s': 1/3.28084, 'in/s': 1/39.37007874, 'mph': 1609.344/60**2, 'kcal': 1/4184, 'cal': 1/4.184} + +@numba.jit(nopython=True, cache=True) +def OoM_numba(x, method="round"): + """ + This function calculates the order of magnitude (OoM) of each element in the input array 'x' using the specified method. + + Parameters: + x (numpy array): The input array for which the OoM is to be calculated. + method (str): The method to be used for calculating the OoM. It can be one of the following: + "round" - round to the nearest integer (default) + "floor" - round down to the nearest integer + "ceil" - round up to the nearest integer + "exact" - return the exact OoM without rounding + + Returns: + x_OoM (numpy array): The array of the same shape as 'x' containing the OoM of each element in 'x'. 
+ """ + + x_OoM = np.empty_like(x) + for i, xi in enumerate(x): + if xi == 0.0: + x_OoM[i] = 1.0 + + elif method.lower() == "floor": + x_OoM[i] = np.floor(np.log10(np.abs(xi))) + + elif method.lower() == "ceil": + x_OoM[i] = np.ceil(np.log10(np.abs(xi))) + + elif method.lower() == "round": + x_OoM[i] = np.round(np.log10(np.abs(xi))) + + else: # "exact" + x_OoM[i] = np.log10(np.abs(xi)) + + return x_OoM + + def OoM(x): + is_array = True + if any([isinstance(x, _type) for _type in [int, float]]): + is_array = False + x = np.array([x]) + if not isinstance(x, np.ndarray): x = np.array(x) + x[x==0] = 1 # if zero, make OoM 0 - return np.floor(np.log10(np.abs(x))) + + if is_array: + return OoM_numba(x, method="floor") + else: + return OoM_numba(x, method="floor")[0] + def RoundToSigFigs(x, p): x = np.asarray(x) diff --git a/src/calculate/mech_fcns.py b/src/calculate/mech_fcns.py index 65e731b..3ad8c6e 100644 --- a/src/calculate/mech_fcns.py +++ b/src/calculate/mech_fcns.py @@ -5,13 +5,15 @@ import os, io, stat, contextlib, pathlib, time from copy import deepcopy import cantera as ct -from cantera import interrupts, cti2yaml#, ck2yaml, ctml2yaml +from cantera import cti2yaml, ck2yaml import numpy as np from calculate import reactors, shock_fcns, integrate -import ck2yaml from timeit import default_timer as timer +arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent'] + + class Chemical_Mechanism: def __init__(self): self.isLoaded = False @@ -20,17 +22,17 @@ def __init__(self): def load_mechanism(self, path, silent=False): def chemkin2cantera(path): if path['thermo'] is not None: - surfaces = ck2yaml.convert_mech(path['mech'], thermo_file=path['thermo'], transport_file=None, surface_file=None, + gas = ck2yaml.convert_mech(path['mech'], thermo_file=path['thermo'], transport_file=None, surface_file=None, phase_name='gas', out_name=path['Cantera_Mech'], quiet=False, permissive=True) else: - surfaces = ck2yaml.convert_mech(path['mech'], 
thermo_file=None, transport_file=None, surface_file=None, + gas = ck2yaml.convert_mech(path['mech'], thermo_file=None, transport_file=None, surface_file=None, phase_name='gas', out_name=path['Cantera_Mech'], quiet=False, permissive=True) - return surfaces + return gas def loader(self, path): # path is assumed to be the path dictionary - surfaces = [] + gas = [] if path['mech'].suffix in ['.yaml', '.yml']: # check if it's a yaml cantera file mech_path = str(path['mech']) else: # if not convert into yaml cantera file @@ -42,14 +44,14 @@ def loader(self, path): raise Exception('not implemented') #ctml2yaml.convert(path['mech'], path['Cantera_Mech']) else: # if not a cantera file, assume chemkin - surfaces = chemkin2cantera(path) + gas = chemkin2cantera(path) - print('Validating mechanism...', end='') + print('Validating mechanism...', end='') try: # This test taken from ck2cti yaml_txt = path['Cantera_Mech'].read_text() self.gas = ct.Solution(yaml=yaml_txt) - for surfname in surfaces: - phase = ct.Interface(mech_path, surfname, [self.gas]) + for gas_name in gas: + phase = ct.Interface(mech_path, gas_name, [self.gas]) print('PASSED.') except RuntimeError as e: print('FAILED.') @@ -79,12 +81,15 @@ def loader(self, path): elif 'PASSED' in ct_out: output['success'] = True self.isLoaded = True + + n_species = self.gas.n_species + n_rxn = self.gas.n_reactions + + output['message'].append(f'Wrote YAML mechanism file to {path["Cantera_Mech"]}.') + output['message'].append(f'Mechanism contains {n_species} species and {n_rxn} reactions.') for log_str in [ct_out, ct_err]: if log_str != '' and not silent: - if (path['Cantera_Mech'], pathlib.WindowsPath): # reformat string to remove \\ making it unable to be copy paste - cantera_path = str(path['Cantera_Mech']).replace('\\', '\\\\') - log_str = log_str.replace(cantera_path, str(path['Cantera_Mech'])) output['message'].append(log_str) output['message'].append('\n') @@ -118,65 +123,75 @@ def get_Arrhenius_parameters(entry): # Set 
kinetics data rxns = [] for rxnIdx in range(len(mech_dict)): - if 'ElementaryReaction' == mech_dict[rxnIdx]['rxnType']: - rxn = ct.ElementaryReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products']) - rxn.allow_negative_pre_exponential_factor = True - + if 'Arrhenius Reaction' == mech_dict[rxnIdx]['rxnType']: A, b, Ea = get_Arrhenius_parameters(mech_dict[rxnIdx]['rxnCoeffs'][0]) - rxn.rate = ct.Arrhenius(A, b, Ea) + rate = ct.ArrheniusRate(A, b, Ea) - elif 'ThreeBodyReaction' == mech_dict[rxnIdx]['rxnType']: - rxn = ct.ThreeBodyReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products']) + rxn = ct.Reaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products'], rate) + elif 'Three Body Reaction' == mech_dict[rxnIdx]['rxnType']: A, b, Ea = get_Arrhenius_parameters(mech_dict[rxnIdx]['rxnCoeffs'][0]) - rxn.rate = ct.Arrhenius(A, b, Ea) + rate = ct.ArrheniusRate(A, b, Ea) + + rxn = ct.ThreeBodyReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products'], rate) rxn.efficiencies = mech_dict[rxnIdx]['rxnCoeffs'][0]['efficiencies'] - elif 'PlogReaction' == mech_dict[rxnIdx]['rxnType']: - rxn = ct.PlogReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products']) - + elif 'Plog Reaction' == mech_dict[rxnIdx]['rxnType']: rates = [] for plog in mech_dict[rxnIdx]['rxnCoeffs']: pressure = plog['Pressure'] A, b, Ea = get_Arrhenius_parameters(plog) rates.append((pressure, ct.Arrhenius(A, b, Ea))) - rxn.rates = rates - - elif 'FalloffReaction' == mech_dict[rxnIdx]['rxnType']: - rxn = ct.FalloffReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products']) + rate = ct.PlogRate(rates) + rxn = ct.Reaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products'], rate) + elif 'Falloff Reaction' == mech_dict[rxnIdx]['rxnType']: # high pressure limit A, b, Ea = get_Arrhenius_parameters(mech_dict[rxnIdx]['rxnCoeffs']['high_rate']) - rxn.high_rate = ct.Arrhenius(A, b, Ea) + high_rate = ct.Arrhenius(A, b, Ea) # low pressure 
limit A, b, Ea = get_Arrhenius_parameters(mech_dict[rxnIdx]['rxnCoeffs']['low_rate']) - rxn.low_rate = ct.Arrhenius(A, b, Ea) + low_rate = ct.Arrhenius(A, b, Ea) # falloff parameters - if mech_dict[rxnIdx]['rxnCoeffs']['falloff_type'] == 'Troe': - falloff_param = mech_dict[rxnIdx]['rxnCoeffs']['falloff_parameters'] - if falloff_param[-1] == 0.0: - falloff_param = falloff_param[0:-1] + falloff_type = mech_dict[rxnIdx]['rxnCoeffs']['falloff_type'] + falloff_coeffs = mech_dict[rxnIdx]['rxnCoeffs']['falloff_parameters'] - rxn.falloff = ct.TroeFalloff(falloff_param) - else: - rxn.falloff = ct.SriFalloff(mech_dict[rxnIdx]['rxnCoeffs']['falloff_parameters']) + if falloff_type == 'Lindemann': + rate = ct.LindemannRate(low_rate, high_rate, falloff_coeffs) + + elif falloff_type == 'Tsang': + rate = ct.TsangRate(low_rate, high_rate, falloff_coeffs) + + elif falloff_type == 'Troe': + if falloff_coeffs[-1] == 0.0: + falloff_coeffs = falloff_coeffs[0:-1] + + rate = ct.TroeRate(low_rate, high_rate, falloff_coeffs) + elif falloff_type == 'SRI': + rate = ct.SriRate(low_rate, high_rate, falloff_coeffs) + + rxn = ct.FalloffReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products'], rate) rxn.efficiencies = mech_dict[rxnIdx]['rxnCoeffs']['efficiencies'] - elif 'ChebyshevReaction' == mech_dict[rxnIdx]['rxnType']: - rxn = ct.ChebyshevReaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products']) - rxn.set_parameters(Tmin=mech_dict['Tmin'], Tmax=mech_dict['Tmax'], - Pmin=mech_dict['Pmin'], Pmax=mech_dict['Pmax'], - coeffs=mech_dict['coeffs']) + elif 'Chebyshev Reaction' == mech_dict[rxnIdx]['rxnType']: + rxn = ct.ChebyshevRate([mech_dict['Tmin'], mech_dict['Tmax']], + [mech_dict['Pmin'], mech_dict['Pmax']], + mech_dict['coeffs']) + + rxn = ct.Reaction(mech_dict[rxnIdx]['reactants'], mech_dict[rxnIdx]['products'], rate) rxn.duplicate = mech_dict[rxnIdx]['duplicate'] rxn.reversible = mech_dict[rxnIdx]['reversible'] rxn.allow_negative_orders = True 
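The falloff migration above replaces the Cantera 2.x pattern (`rxn.falloff = ct.TroeFalloff(...)`) with rate objects (`ct.LindemannRate`, `ct.TsangRate`, `ct.TroeRate`, `ct.SriRate`). The pressure blending these classes encode can be sanity-checked without Cantera; below is a minimal sketch of the simplest (Lindemann) form with hypothetical coefficients, using activation energies in J/kmol to mirror Cantera's convention:

```python
import math

def arrhenius(A, b, Ea, T, R=8314.46261815324):
    # modified Arrhenius form k = A * T**b * exp(-Ea / (R*T)); Ea in J/kmol
    return A * T**b * math.exp(-Ea / (R * T))

def lindemann_rate(T, M, low, high):
    # low/high are (A, b, Ea) tuples; M is the third-body concentration
    k0 = arrhenius(*low, T)
    kinf = arrhenius(*high, T)
    Pr = k0 * M / kinf             # reduced pressure
    return kinf * Pr / (1.0 + Pr)  # blends k0*M (low-P) into kinf (high-P)
```

At low third-body concentration the rate collapses to `k0*M`; at high concentration it saturates at `kinf`, which is the limiting behavior all of the falloff rate classes share.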
rxn.allow_nonreactant_orders = True + if hasattr(rxn, "allow_negative_pre_exponential_factor"): + rxn.allow_negative_pre_exponential_factor = True + rxns.append(rxn) self.gas = ct.Solution(thermo='IdealGas', kinetics='GasKinetics', @@ -185,7 +200,24 @@ def get_Arrhenius_parameters(entry): self.set_rate_expression_coeffs(bnds) # set copy of coeffs self.set_thermo_expression_coeffs() # set copy of thermo coeffs - def gas(self): return self.gas + def gas(self): return self.gas + + + def reaction_type(self, rxn): + if type(rxn.rate) is ct.ArrheniusRate: + if rxn.reaction_type == "three-body": + return "Three Body Reaction" + else: + return "Arrhenius Reaction" + elif type(rxn.rate) is ct.PlogRate: + return "Plog Reaction" + elif type(rxn.rate) in [ct.FalloffRate, ct.LindemannRate, ct.TsangRate, ct.TroeRate, ct.SriRate]: + return "Falloff Reaction" + elif type(rxn.rate) is ct.ChebyshevRate: + return "Chebyshev Reaction" + else: + return str(type(rxn.rate)) + def set_rate_expression_coeffs(self, bnds=[]): def copy_bnds(new_bnds, bnds, rxnIdx, bnds_type, keys=[]): @@ -211,94 +243,96 @@ def copy_bnds(new_bnds, bnds, rxnIdx, bnds_type, keys=[]): for rxnIdx, rxn in enumerate(self.gas.reactions()): rate_bnds.append({'value': np.nan, 'limits': Uncertainty('rate', rxnIdx, rate_bnds=rate_bnds), 'type': 'F', 'opt': False}) rate_bnds = copy_bnds(rate_bnds, bnds, rxnIdx, 'rate') - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: - attrs = [p for p in dir(rxn.rate) if not p.startswith('_')] # attributes not including __ - coeffs.append([{attr: getattr(rxn.rate, attr) for attr in attrs}]) - if type(rxn) is ct.ThreeBodyReaction: - coeffs[-1][0]['efficiencies'] = rxn.efficiencies + + rxn_type = self.reaction_type(rxn) + + if rxn_type in ["Arrhenius Reaction", "Three Body Reaction"]: + coeffs.append([{attr: getattr(rxn.rate, attr) for attr in arrhenius_coefNames}]) + if rxn_type == "Three Body Reaction": + coeffs[-1][0]['efficiencies'] = rxn.third_body.efficiencies 
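The coefficient snapshot above now uses the fixed `arrhenius_coefNames` list with `getattr` instead of filtering `dir(rxn.rate)`, which picked up unrelated attributes. The pattern can be sketched with a stand-in rate object (the class and numbers here are hypothetical, standing in for `ct.ArrheniusRate`):

```python
arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent']

class FakeArrheniusRate:
    # stand-in exposing the same three coefficient attributes as ct.ArrheniusRate
    def __init__(self, A, b, Ea):
        self.pre_exponential_factor = A
        self.temperature_exponent = b
        self.activation_energy = Ea

def snapshot(rate):
    # copy only the named coefficients into a plain dict, as the diff does
    return {attr: getattr(rate, attr) for attr in arrhenius_coefNames}
```

Pinning the attribute names keeps the snapshot stable across Cantera versions, since `dir()` output changed between 2.x reaction objects and 3.0 rate objects.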
coeffs_bnds.append({'rate': {attr: {'resetVal': coeffs[-1][0][attr], 'value': np.nan, 'limits': Uncertainty('coef', rxnIdx, key='rate', coef_name=attr, coeffs_bnds=coeffs_bnds), - 'type': 'F', 'opt': False} for attr in attrs}}) + 'type': 'F', 'opt': False} for attr in arrhenius_coefNames}}) - coeffs_bnds = copy_bnds(coeffs_bnds, bnds, rxnIdx, 'coeffs', ['rate', attrs]) + coeffs_bnds = copy_bnds(coeffs_bnds, bnds, rxnIdx, 'coeffs', ['rate', arrhenius_coefNames]) - reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn.__class__.__name__, + reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn_type, 'duplicate': rxn.duplicate, 'reversible': rxn.reversible, 'orders': rxn.orders, 'rxnCoeffs': deepcopy(coeffs[-1])}) - elif type(rxn) is ct.PlogReaction: + elif rxn_type == "Plog Reaction": coeffs.append([]) coeffs_bnds.append({}) - for n, rate in enumerate(rxn.rates): - attrs = [p for p in dir(rate[1]) if not p.startswith('_')] # attributes not including __ + for n, rate in enumerate(rxn.rate.rates): coeffs[-1].append({'Pressure': rate[0]}) - coeffs[-1][-1].update({attr: getattr(rate[1], attr) for attr in attrs}) - if n == 0 or n == len(rxn.rates)-1: # only going to allow coefficient uncertainties to be placed on upper and lower pressures + coeffs[-1][-1].update({attr: getattr(rate[1], attr) for attr in arrhenius_coefNames}) + if n == 0 or n == len(rxn.rate.rates)-1: # only going to allow coefficient uncertainties to be placed on upper and lower pressures if n == 0: key = 'low_rate' else: key = 'high_rate' coeffs_bnds[-1][key] = {attr: {'resetVal': coeffs[-1][-1][attr], 'value': np.nan, 'limits': Uncertainty('coef', rxnIdx, key=key, coef_name=attr, coeffs_bnds=coeffs_bnds), - 'type': 'F', 'opt': False} for attr in attrs} + 'type': 'F', 'opt': False} for attr in arrhenius_coefNames} - coeffs_bnds = copy_bnds(coeffs_bnds, bnds, rxnIdx, 'coeffs', [key, attrs]) + coeffs_bnds = copy_bnds(coeffs_bnds, bnds, 
rxnIdx, 'coeffs', [key, arrhenius_coefNames]) - reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn.__class__.__name__, + reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn_type, 'duplicate': rxn.duplicate, 'reversible': rxn.reversible, 'orders': rxn.orders, 'rxnCoeffs': deepcopy(coeffs[-1])}) - elif type(rxn) is ct.FalloffReaction: + elif rxn_type == "Falloff Reaction": coeffs_bnds.append({}) - coeffs.append({'falloff_type': rxn.falloff.type, 'high_rate': [], 'low_rate': [], 'falloff_parameters': list(rxn.falloff.parameters), - 'default_efficiency': rxn.default_efficiency, 'efficiencies': rxn.efficiencies}) + fallof_type = rxn.reaction_type.split('-')[1] + + coeffs.append({'falloff_type': fallof_type, 'high_rate': [], 'low_rate': [], 'falloff_parameters': list(rxn.rate.falloff_coeffs), + 'default_efficiency': rxn.third_body.default_efficiency, 'efficiencies': rxn.third_body.efficiencies}) for key in ['low_rate', 'high_rate']: - rate = getattr(rxn, key) - attrs = [p for p in dir(rate) if not p.startswith('_')] # attributes not including __ - coeffs[-1][key] = {attr: getattr(rate, attr) for attr in attrs} + rate = getattr(rxn.rate, key) + coeffs[-1][key] = {attr: getattr(rate, attr) for attr in arrhenius_coefNames} coeffs_bnds[-1][key] = {attr: {'resetVal': coeffs[-1][key][attr], 'value': np.nan, 'limits': Uncertainty('coef', rxnIdx, key=key, coef_name=attr, coeffs_bnds=coeffs_bnds), - 'type': 'F', 'opt': False} for attr in attrs} + 'type': 'F', 'opt': False} for attr in arrhenius_coefNames} - coeffs_bnds = copy_bnds(coeffs_bnds, bnds, rxnIdx, 'coeffs', [key, attrs]) + coeffs_bnds = copy_bnds(coeffs_bnds, bnds, rxnIdx, 'coeffs', [key, arrhenius_coefNames]) key = 'falloff_parameters' - n_coef = len(rxn.falloff.parameters) + n_coef = len(rxn.rate.falloff_coeffs) coeffs_bnds[-1][key] = {n: {'resetVal': coeffs[-1][key][n], 'value': np.nan, 'limits': Uncertainty('coef', rxnIdx, key=key, coef_name=n, 
coeffs_bnds=coeffs_bnds), 'type': 'F', 'opt': True} for n in range(0,n_coef)} - reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn.__class__.__name__, + reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn_type, 'duplicate': rxn.duplicate, 'reversible': rxn.reversible, 'orders': rxn.orders, - 'falloffType': rxn.falloff.type, 'rxnCoeffs': deepcopy(coeffs[-1])}) + 'falloffType': fallof_type, 'rxnCoeffs': deepcopy(coeffs[-1])}) - elif type(rxn) is ct.ChebyshevReaction: + elif rxn_type == "Chebyshev Reaction": coeffs.append({}) coeffs_bnds.append({}) - if len(bnds) == 0: - rate_bnds.append({}) - reset_coeffs = {'Pmin': rxn.Pmin, 'Pmax': rxn.Pmax, 'Tmin': rxn.Tmin, 'Tmax': rxn.Tmax, 'coeffs': rxn.coeffs} - reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn.__class__.__name__, + reset_coeffs = {'Pmin': rxn.rate.pressure_range[0], 'Pmax': rxn.rate.pressure_range[1], + 'Tmin': rxn.rate.temperature_range[0], 'Tmax': rxn.rate.temperature_range[1], + 'coeffs': rxn.rate.data} + reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn_type, 'duplicate': rxn.duplicate, 'reversible': rxn.reversible, 'orders': rxn.orders, 'rxnCoeffs': reset_coeffs}) else: coeffs.append({}) coeffs_bnds.append({}) - if len(bnds) == 0: - rate_bnds.append({}) - reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn.__class__.__name__}) - raise(f'{rxn} is a {rxn.__class__.__name__} and is currently unsupported in Frhodo, but this error should never be seen') + reset_mech.append({'reactants': rxn.reactants, 'products': rxn.products, 'rxnType': rxn_type}) + msg = f'{rxn} is a {rxn_type} and is currently unsupported in Frhodo' + raise(Exception(msg)) + def get_coeffs_keys(self, rxn, coefAbbr, rxnIdx=None): - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) is ct.ArrheniusRate: bnds_key = 'rate' 
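The Chebyshev branch above reads `rate.temperature_range`, `rate.pressure_range`, and `rate.data` off the Cantera 3.0 rate object. These encode log10(k) as a 2-D Chebyshev series in reduced temperature and pressure. A sketch of the evaluation using NumPy's `chebval2d`, with an illustrative single-coefficient series (row index in temperature, as in Cantera's documentation):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval2d

def chebyshev_rate(T, P, Tmin, Tmax, Pmin, Pmax, coeffs):
    # reduced temperature and pressure, each mapped onto [-1, 1]
    Tr = (2.0 / T - 1.0 / Tmin - 1.0 / Tmax) / (1.0 / Tmax - 1.0 / Tmin)
    Pr = (2.0 * np.log10(P) - np.log10(Pmin) - np.log10(Pmax)) / (
        np.log10(Pmax) - np.log10(Pmin)
    )
    # log10(k) is a 2-D Chebyshev series in (Tr, Pr)
    return 10.0 ** chebval2d(Tr, Pr, coeffs)
```

With `coeffs = [[1.0]]` the series is constant, so k = 10 everywhere inside the fitted (T, P) window, which makes a convenient smoke test for the reduced-coordinate mapping.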
coef_key = 0 - elif type(rxn) is ct.PlogReaction: + elif type(rxn.rate) is ct.PlogRate: if 'high' in coefAbbr: if rxnIdx is None: # get reaction index if not provided for rxnIdx, mechRxn in enumerate(self.gas.reactions()): @@ -311,7 +345,7 @@ def get_coeffs_keys(self, rxn, coefAbbr, rxnIdx=None): bnds_key = 'low_rate' coef_key = 0 - elif type(rxn) is ct.FalloffReaction: + elif type(rxn.rate) in [ct.FalloffRate, ct.TroeRate, ct.SriRate]: if 'high' in coefAbbr: coef_key = bnds_key = 'high_rate' elif 'low' in coefAbbr: @@ -343,7 +377,8 @@ def modify_reactions(self, coeffs, rxnIdxs=[]): # Only works for Arrhenius e for rxnIdx in rxnIdxs: rxn = self.gas.reaction(rxnIdx) rxnChanged = False - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + + if type(rxn.rate) is ct.ArrheniusRate: for coefName in ['activation_energy', 'pre_exponential_factor', 'temperature_exponent']: if coeffs[rxnIdx][0][coefName] != eval(f'rxn.rate.{coefName}'): rxnChanged = True @@ -352,34 +387,42 @@ def modify_reactions(self, coeffs, rxnIdxs=[]): # Only works for Arrhenius e A = coeffs[rxnIdx][0]['pre_exponential_factor'] b = coeffs[rxnIdx][0]['temperature_exponent'] Ea = coeffs[rxnIdx][0]['activation_energy'] - rxn.rate = ct.Arrhenius(A, b, Ea) + rxn.rate = ct.ArrheniusRate(A, b, Ea) - elif type(rxn) is ct.FalloffReaction: - for key in ['low_rate', 'high_rate', 'falloff_parameters']: + elif type(rxn.rate) in [ct.FalloffRate, ct.TroeRate, ct.SriRate]: + rate_dict = {'low_rate': None, 'high_rate': None, 'falloff_parameters': None} + for key in rate_dict.keys(): if 'rate' in key: for coefName in ['activation_energy', 'pre_exponential_factor', 'temperature_exponent']: - if coeffs[rxnIdx][key][coefName] != eval(f'rxn.{key}.{coefName}'): + if coeffs[rxnIdx][key][coefName] != eval(f'rxn.rate.{key}.{coefName}'): rxnChanged = True A = coeffs[rxnIdx][key]['pre_exponential_factor'] b = coeffs[rxnIdx][key]['temperature_exponent'] Ea = coeffs[rxnIdx][key]['activation_energy'] - setattr(rxn, key, 
ct.Arrhenius(A, b, Ea)) + rate_dict[key] = ct.Arrhenius(A, b, Ea) break else: - length_different = len(coeffs[rxnIdx][key]) != len(rxn.falloff.parameters) - if length_different or (coeffs[rxnIdx][key] != rxn.falloff.parameters).any(): + length_different = len(coeffs[rxnIdx][key]) != len(rxn.rate.falloff_coeffs) + if length_different or (coeffs[rxnIdx][key] != rxn.rate.falloff_coeffs).any(): rxnChanged = True if coeffs[rxnIdx]['falloff_type'] == 'Troe': if coeffs[rxnIdx][key][-1] == 0.0: - rxn.falloff = ct.TroeFalloff(coeffs[rxnIdx][key][:-1]) + rate_dict[key] = coeffs[rxnIdx][key][:-1] else: - rxn.falloff = ct.TroeFalloff(coeffs[rxnIdx][key]) + rate_dict[key] = coeffs[rxnIdx][key] else: # could also be SRI. For optimization this would need to be cast as Troe - rxn.falloff = ct.SriFalloff(coeffs[rxnIdx][key]) + rate_dict[key] = ct.SriFalloff(coeffs[rxnIdx][key]) + + if coeffs[rxnIdx]['falloff_type'] == 'Troe': + rate = ct.TroeRate(rate_dict['low_rate'], rate_dict['high_rate'], rate_dict['falloff_parameters']) + else: + rate = ct.SriRate(rate_dict['low_rate'], rate_dict['high_rate'], rate_dict['falloff_parameters']) + + rxn.rate = rate - elif type(rxn) is ct.ChebyshevReaction: + elif type(rxn.rate) is ct.ChebyshevRate: pass else: continue @@ -393,7 +436,7 @@ def modify_reactions(self, coeffs, rxnIdxs=[]): # Only works for Arrhenius e def rxn2Troe(self, rxnIdx, HPL, LPL, eff={}): reactants = self.gas.reaction(rxnIdx).reactants products = self.gas.reaction(rxnIdx).products - r = ct.FalloffReaction(reactants, products) + r = ct.FalloffRate(reactants, products) print(r) #r.high_rate = ct.Arrhenius(7.4e10, -0.37, 0.0) #r.low_rate = ct.Arrhenius(2.3e12, -0.9, -1700*1000*4.184) @@ -455,18 +498,18 @@ def reset(self, rxnIdxs=None, coefNames=None): if coefNames is None: # resets all coefficients in rxn self.coeffs[rxnIdx] = self.reset_mech[rxnIdx]['rxnCoeffs'] - elif self.reset_mech[rxnIdx]['rxnType'] in ['ElementaryReaction', 'ThreeBodyReaction']: + elif 
self.reset_mech[rxnIdx]['rxnType'] in ['Arrhenius Reaction', 'Three Body Reaction']: for coefName in coefNames: self.coeffs[rxnIdx][coefName] = self.reset_mech[rxnIdx]['rxnCoeffs'][coefName] - elif 'PlogReaction' == self.reset_mech[rxnIdx]['rxnType']: + elif 'Plog Reaction' == self.reset_mech[rxnIdx]['rxnType']: for [limit_type, coefName] in coefNames: if limit_type == 'low_rate': self.coeffs[rxnIdx][0][coefName] = self.reset_mech[rxnIdx]['rxnCoeffs'][0][coefName] elif limit_type == 'high_rate': self.coeffs[rxnIdx][-1][coefName] = self.reset_mech[rxnIdx]['rxnCoeffs'][-1][coefName] - elif 'FalloffReaction' == self.reset_mech[rxnIdx]['rxnType']: + elif 'Falloff Reaction' == self.reset_mech[rxnIdx]['rxnType']: self.coeffs[rxnIdx]['falloff_type'] = self.reset_mech[rxnIdx]['falloffType'] for [limit_type, coefName] in coefNames: self.coeffs[rxnIdx][limit_type][coefName] = self.reset_mech[rxnIdx]['rxnCoeffs'][limit_type][coefName] @@ -502,11 +545,11 @@ def set_TPX(self, T, P, X=[]): def M(self, rxn, TPX=[]): # kmol/m^3 def get_M(rxn): M = self.gas.density_mole - if hasattr(rxn, 'efficiencies') and rxn.efficiencies: - M *= rxn.default_efficiency + if rxn.third_body is not None: + M *= rxn.third_body.default_efficiency for (s, conc) in zip(self.gas.species_names, self.gas.concentrations): - if s in rxn.efficiencies: - M += conc*(rxn.efficiencies[s] - 1.0) + if s in rxn.third_body.efficiencies: + M += conc*(rxn.third_body.efficiencies[s] - 1.0) else: M += conc return M diff --git a/src/calculate/optimize/adaptive_loss.py b/src/calculate/optimize/adaptive_loss.py new file mode 100644 index 0000000..faea5ea --- /dev/null +++ b/src/calculate/optimize/adaptive_loss.py @@ -0,0 +1,267 @@ +import numpy as np +from scipy.optimize import minimize_scalar +from scipy.interpolate import BSpline +import numba + +from .adaptive_loss_tck import tck +from calculate.convert_units import OoM_numba + + +numba_cache = False +loss_alpha_min = -100.0 + + +@numba.jit(nopython=True, 
cache=numba_cache) +def weighted_quantile( + values, quantiles, weights=np.array([]), values_presorted=False, old_style=False +): + """https://stackoverflow.com/questions/21844024/weighted-percentile-using-numpy + Very close to numpy.percentile, but supports weights. + NOTE: quantiles should be in [0, 1]! + :param values: numpy.array with data + :param quantiles: array-like with many quantiles needed + :param weights: array-like of the same length as `values` + :param values_presorted: bool, if True, then will avoid sorting of + initial array + :param old_style: if True, will correct output to be consistent + with numpy.quantile. + :return: numpy.array with computed quantiles. + """ + finite_idx = np.where(np.isfinite(values)) + values = values[finite_idx] + if len(weights) == 0: + weights = np.ones_like(values) + else: + weights = weights[finite_idx] + + assert np.all(quantiles >= 0) and np.all( + quantiles <= 1 + ), "quantiles should be in [0, 1]" + + if not values_presorted: + sorter = np.argsort(values) + values = values[sorter] + weights = weights[sorter] + + res = np.cumsum(weights) - 0.5 * weights + if old_style: # To be consistent with numpy.quantile + res -= res[0] + res /= res[-1] + else: + res /= np.sum(weights) + + return np.interp(quantiles, res, values) + + +def remove_outliers(data, weights=np.array([]), sigma_threshold=3, quantile=0.25): + outlier_bnds = IQR_outlier(data, weights, sigma_threshold, quantile) + idx_no_outliers = np.argwhere( + (data >= outlier_bnds[0]) & (data <= outlier_bnds[1]) + ).flatten() + data_no_outliers = data[idx_no_outliers] + + return data_no_outliers, idx_no_outliers + + +@numba.jit(nopython=True, cache=numba_cache) +def IQR_outlier(data, weights=np.array([]), sigma_threshold=3, quantile=0.25): + # only use finite data + if len(weights) == 0: + q13 = np.quantile(data[np.isfinite(data)], np.array([quantile, 1 - quantile])) + else: # weighted_quantile could be used always, don't know speed + q13 = weighted_quantile(
data[np.isfinite(data)], np.array([quantile, 1 - quantile]), weights=weights + ) + + q13_scalar = ( + 0.7413 * sigma_threshold - 0.5 + ) # this is a pretty good fit to get the scalar for any sigma + iqr = np.diff(q13)[0] * q13_scalar + outlier_threshold = np.array([q13[0] - iqr, q13[1] + iqr]) + + return outlier_threshold + + +# TODO: uncertain if these C functions should use np.min, np.mean, or np.max +@numba.jit(nopython=True, error_model="numpy", cache=numba_cache) +def get_C(resid, mu, sigma, weights=np.array([]), C_scalar=1, quantile=0.25): + q13 = IQR_outlier( + resid - mu, weights=weights, sigma_threshold=sigma, quantile=quantile + ) + C = np.max(np.abs(q13)) + + if C == 0: + C = OoM_numba(np.array([np.max(q13)]), method="floor")[0] + + return C*C_scalar # decreasing outliers increases outlier rejection + + +@numba.jit(nopython=True, error_model="numpy", cache=numba_cache) +def generalized_loss_fcn( + x, a=2, a_min=loss_alpha_min +): # defaults to sum of squared error + x_2 = x**2 + + if a == 2.0: # L2 + loss = 0.5 * x_2 + elif a == 1.0: # smoothed L1 + loss = np.sqrt(x_2 + 1) - 1 + elif a == 0.0: # Charbonnier loss + loss = np.log(0.5 * x_2 + 1) + elif a == -2.0: # Cauchy/Lorentzian loss + loss = 2 * x_2 / (x_2 + 4) + elif a <= a_min: # at -infinity, Welsch/Leclerc loss + loss = 1 - np.exp(-0.5 * x_2) + else: + loss = np.abs(a - 2) / a * ((x_2 / np.abs(a - 2) + 1) ** (a / 2) - 1) + + return loss + + +@numba.jit(nopython=True, error_model="numpy", cache=numba_cache) +def generalized_loss_derivative(x, c=1, a=2): + if a == 2.0: # L2 + dloss_dx = x / c**2 + elif a == 1.0: # smoothed L1 + dloss_dx = x / c**2 / np.sqrt((x / c) ** 2 + 1) + elif a == 0.0: # Charbonnier loss + dloss_dx = 2 * x / (x**2 + 2 * c**2) + elif a == -2.0: # Cauchy/Lorentzian loss + dloss_dx = 16 * c**2 * x / (4 * c**2 + x**2) ** 2 + elif a <= loss_alpha_min: # at -infinity, Welsch/Leclerc loss + dloss_dx = x / c**2 * np.exp(-0.5 * (x / c) ** 2) + else: + dloss_dx = x / c**2 * ((x / c) ** 2 
/ np.abs(a - 2) + 1) ** (a / 2 - 1) + + return dloss_dx + + +@numba.jit(nopython=True, error_model="numpy", cache=numba_cache) +def generalized_loss_weights(x: np.ndarray, a: float = 2, min_weight: float = 0.00): + w = np.ones(len(x), dtype=numba.float64) + for i, xi in enumerate(x): + if a == 2 or xi <= 0: + w[i] = 1 + elif a == 0: + w[i] = 1 / (0.5 * xi**2 + 1) + elif a <= loss_alpha_min: + w[i] = np.exp(-0.5 * xi**2) + else: + w[i] = (xi**2 / np.abs(a - 2) + 1) ** (0.5 * a - 1) + + return w * (1 - min_weight) + min_weight + + +# approximate partition function for C=1, tau(alpha < 0)=1E5, tau(alpha >= 0)=inf +# error < 4E-7 +ln_Z_fit = BSpline.construct_fast(*tck) +ln_Z_inf = 11.206072645530174 +def ln_Z(alpha, alpha_min=-1E6): + if alpha <= alpha_min: + return ln_Z_inf + + return ln_Z_fit(alpha) + + +# penalize the loss function using approximate partition function +# default to L2 loss +def penalized_loss_fcn(x, a=2, use_penalty=True): + loss = generalized_loss_fcn(x, a) + + if use_penalty: + penalty = ln_Z(a, loss_alpha_min) # approximate partition function for C=1, tau=10 + loss += penalty + + if not np.isfinite(loss).all(): + # print("a: ", a) + # print("x: ", x) + # print("penalty: ", penalty) + raise Exception("non-finite values in 'penalized_loss_fcn'") + + return loss + + +@numba.jit(nopython=True, error_model='numpy', cache=numba_cache) +def alpha_scaled(s, a_max=2): + if a_max == 2: + a = 3 + b = 0.25 + + if s < 0: + s = 0 + + if s > 1: + s = 1 + + s_max = (1 - 2/(1 + 10**a)) + s = (1 - 2/(1 + 10**(a*s**b)))/s_max + + alpha = loss_alpha_min + (2 - loss_alpha_min)*s + + else: + x0 = 1 + k = 1.5 # 1 or 1.5, testing required + + if s >= 1: + return 100 + elif s <= 0: + return -100 + + A = (np.exp((100 - x0)/k) + 1)/(1 - np.exp(200/k)) + K = (1 - A)*np.exp((x0 - 100)/k) + 1 + + alpha = x0 - k*np.log((K - A)/(s - A) - 1) + + return alpha + + +def adaptive_loss_fcn(x, mu=0, c=1, alpha="adaptive", replace_nonfinite=True): + if np.all(mu != 0) or np.all(c != 1): + x = (x -
mu) / c # standardized residuals + + if replace_nonfinite: + x[~np.isfinite(x)] = np.max(x) + + loss_alpha_fcn = lambda alpha: penalized_loss_fcn( + x, a=alpha, use_penalty=True + ).sum() + + if alpha == "adaptive": # + res = minimize_scalar( + lambda s: loss_alpha_fcn(alpha_scaled(s)), + bounds=[-1e-5, 1 + 1e-5], + method="bounded", + options={"xtol": 1e-5}, + ) + loss_alpha = alpha_scaled(res.x) + # res = minimize(lambda s: loss_alpha_fcn(alpha_scaled(s[0])), x0=[0.7], bounds=[[0, 1]], method="L-BFGS-B") + # loss_alpha = alpha_scaled(res.x[0]) + loss_fcn_val = res.fun + + else: + loss_alpha = alpha + loss_fcn_val = loss_alpha_fcn(alpha) + + return loss_fcn_val, loss_alpha + + +# Assumes that x has not been standardized +def adaptive_weights( + x, weights=np.array([]), C_scalar=1, alpha="adaptive", + sigma=3, quantile=0.25, min_weight=0.00, replace_nonfinite=True +): + x_no_outlier, _ = remove_outliers(x, sigma_threshold=sigma, quantile=0.25) + + # TODO: Should x be abs or not? + # mu = np.median(np.abs(x_no_outlier)) + mu = np.median(x_no_outlier) + + C = get_C(x, mu, sigma, weights, C_scalar, quantile) + x = (x - mu) / C + + if alpha == "adaptive": + _, alpha = adaptive_loss_fcn( + x, alpha=alpha, replace_nonfinite=replace_nonfinite + ) + + return generalized_loss_weights(x, a=alpha, min_weight=min_weight), C, alpha diff --git a/src/calculate/optimize/adaptive_loss_tck.py b/src/calculate/optimize/adaptive_loss_tck.py new file mode 100644 index 0000000..33d5b1d --- /dev/null +++ b/src/calculate/optimize/adaptive_loss_tck.py @@ -0,0 +1,180 @@ +import numpy as np + +tck = (np.array([-1.00000000e+02, -1.00000000e+02, -1.00000000e+02, -1.00000000e+02, + -1.00000000e+02, -1.00000000e+02, -9.89361859e+01, -9.78769353e+01, + -9.68222422e+01, -9.57721311e+01, -9.47266052e+01, -9.36856785e+01, + -9.26493549e+01, -9.16176510e+01, -9.05905750e+01, -8.95681338e+01, + -8.85503451e+01, -8.75372137e+01, -8.65287527e+01, -8.55249701e+01, + -8.45258826e+01, -8.35314946e+01, 
-8.25418228e+01, -8.15568742e+01, + -8.05766627e+01, -7.96012006e+01, -7.86304995e+01, -7.76645697e+01, + -7.67034243e+01, -7.57470661e+01, -7.47955220e+01, -7.38487948e+01, + -7.29069029e+01, -7.19698546e+01, -7.10376588e+01, -7.01103331e+01, + -6.91878955e+01, -6.82703491e+01, -6.73577126e+01, -6.64499973e+01, + -6.55472139e+01, -6.46493825e+01, -6.37565132e+01, -6.28686175e+01, + -6.19857154e+01, -6.11078190e+01, -6.02349388e+01, -5.93670934e+01, + -5.85042947e+01, -5.76465626e+01, -5.67939033e+01, -5.59463397e+01, + -5.51038844e+01, -5.42665479e+01, -5.34343606e+01, -5.26073239e+01, + -5.17854599e+01, -5.09687840e+01, -5.01573151e+01, -4.93510657e+01, + -4.85500572e+01, -4.77543027e+01, -4.69638198e+01, -4.61786326e+01, + -4.53987564e+01, -4.46242021e+01, -4.38549956e+01, -4.30911563e+01, + -4.23327002e+01, -4.15796433e+01, -4.08320136e+01, -4.00898242e+01, + -3.93530985e+01, -3.86218530e+01, -3.78961169e+01, -3.71759034e+01, + -3.64612346e+01, -3.57521353e+01, -3.50486253e+01, -3.43507255e+01, + -3.36584656e+01, -3.29718664e+01, -3.22909472e+01, -3.16157312e+01, + -3.09462522e+01, -3.02825245e+01, -2.96245821e+01, -2.89724443e+01, + -2.83261393e+01, -2.76856938e+01, -2.70511383e+01, -2.64224972e+01, + -2.57997997e+01, -2.51830719e+01, -2.45723506e+01, -2.39676630e+01, + -2.33690331e+01, -2.27765001e+01, -2.21900941e+01, -2.16098480e+01, + -2.10357920e+01, -2.04679677e+01, -1.99064059e+01, -1.93511408e+01, + -1.88022101e+01, -1.82596582e+01, -1.77235133e+01, -1.71938267e+01, + -1.66706289e+01, -1.61539696e+01, -1.56438888e+01, -1.51404339e+01, + -1.46436468e+01, -1.41535762e+01, -1.36702706e+01, -1.31937875e+01, + -1.27241733e+01, -1.22614812e+01, -1.18057714e+01, -1.13571013e+01, + -1.09155320e+01, -1.04811305e+01, -1.00539633e+01, -9.63409624e+00, + -9.22161100e+00, -8.81657996e+00, -8.41909527e+00, -8.02924219e+00, + -7.64712055e+00, -7.27284027e+00, -6.90651906e+00, -6.54828863e+00, + -6.19830350e+00, -5.85673528e+00, -5.52383861e+00, -5.19984603e+00, + 
-4.88505373e+00, -4.58000136e+00, -4.28507371e+00, -4.00123186e+00, + -3.72891771e+00, -3.46975160e+00, -3.22437193e+00, -2.99393266e+00, + -2.82276138e+00, -2.61883218e+00, -2.43017422e+00, -2.25593926e+00, + -2.09588408e+00, -1.94907151e+00, -1.89820406e+00, -1.89619413e+00, + -1.89581028e+00, -1.89563092e+00, -1.89550466e+00, -1.89539596e+00, + -1.89530799e+00, -1.89523234e+00, -1.89515983e+00, -1.89509618e+00, + -1.89503580e+00, -1.89497709e+00, -1.89491628e+00, -1.89485245e+00, + -1.89478574e+00, -1.89470777e+00, -1.89462133e+00, -1.89452009e+00, + -1.89439581e+00, -1.89423663e+00, -1.89396741e+00, -1.86050313e+00, + -1.73225064e+00, -1.61345288e+00, -1.50404046e+00, -1.40317150e+00, + -1.31002924e+00, -1.22379266e+00, -1.14438410e+00, -1.07042988e+00, + -1.00198483e+00, -9.37982760e-01, -8.78393703e-01, -8.22858167e-01, + -7.70713415e-01, -7.21693169e-01, -6.75489158e-01, -6.31756155e-01, + -5.90145042e-01, -5.50438977e-01, -5.11929011e-01, -4.73779395e-01, + -4.31618583e-01, -3.92781450e-01, -3.65429131e-01, -3.42432936e-01, + -3.22400780e-01, -3.04663604e-01, -2.88563766e-01, -2.73745130e-01, + -2.59977591e-01, -2.47006359e-01, -2.34459092e-01, -2.21779876e-01, + -2.06915915e-01, -1.94314435e-01, -1.84284677e-01, -1.75144063e-01, + -1.66399622e-01, -1.57587843e-01, -1.47896761e-01, -1.35800722e-01, + -1.26229577e-01, -1.17397288e-01, -1.08566317e-01, -9.92690484e-02, + -8.89856367e-02, -7.53791943e-02, -6.13039785e-02, -4.86661484e-02, + -3.58472926e-02, -2.22378397e-02, -7.26184238e-03, -2.05587892e-03, + -1.42203296e-03, -7.09240561e-04, -4.74680027e-04, -3.31830926e-04, + -2.16225378e-04, -1.11938193e-04, -1.13163582e-05, 1.01821936e-04, + 2.37443677e-04, 4.15071062e-04, 1.38705239e-03, 1.30540042e-02, + 3.25012890e-02, 5.68008960e-02, 8.68674209e-02, 1.23110030e-01, + 1.67150300e-01, 2.19844037e-01, 2.82582173e-01, 3.56909933e-01, + 4.44551688e-01, 5.48261555e-01, 6.69373686e-01, 8.10210333e-01, + 9.68835440e-01, 1.13479207e+00, 1.29016927e+00, 
1.37579828e+00, + 1.37756953e+00, 1.37792658e+00, 1.37811985e+00, 1.37830625e+00, + 1.37871338e+00, 1.39652311e+00, 1.50223094e+00, 1.60204852e+00, + 1.68444922e+00, 1.75210787e+00, 1.80739646e+00, 1.85184850e+00, + 1.88771768e+00, 1.91598008e+00, 1.93817596e+00, 1.95558092e+00, + 1.96868983e+00, 1.97837027e+00, 1.98544015e+00, 1.99072246e+00, + 1.99423491e+00, 1.99664031e+00, 1.99814454e+00, 1.99904346e+00, + 1.99942427e+00, 1.99958837e+00, 1.99971103e+00, 1.99981950e+00, + 1.99992971e+00, 2.00003967e+00, 2.00014572e+00, 2.00026154e+00, + 2.00040028e+00, 2.00059303e+00, 2.00105431e+00, 2.00211390e+00, + 2.00381552e+00, 2.00661314e+00, 2.01062515e+00, 2.01630706e+00, + 2.02413336e+00, 2.03496582e+00, 2.04922868e+00, 2.06754202e+00, + 2.09116731e+00, 2.12103970e+00, 2.15821246e+00, 2.20394097e+00, + 2.26008195e+00, 2.32805389e+00, 2.40948866e+00, 2.50698102e+00, + 2.62276727e+00, 2.76051403e+00, 2.92292363e+00, 3.11499335e+00, + 3.34113550e+00, 3.60831022e+00, 3.91873624e+00, 4.28145146e+00, + 4.69630296e+00, 5.16146905e+00, 5.66972575e+00, 6.21509727e+00, + 6.79391125e+00, 7.40309939e+00, 8.04104432e+00, 8.70650460e+00, + 9.39861761e+00, 1.01166873e+01, 1.08600969e+01, 1.16283340e+01, + 1.24209373e+01, 1.32374826e+01, 1.40775899e+01, 1.49409044e+01, + 1.58270879e+01, 1.67358346e+01, 1.76668531e+01, 1.86198662e+01, + 1.95946087e+01, 2.05908354e+01, 2.16083077e+01, 2.26468007e+01, + 2.37061000e+01, 2.47859931e+01, 2.58862849e+01, 2.70067835e+01, + 2.81473059e+01, 2.93076770e+01, 3.04877181e+01, 3.16872722e+01, + 3.29061767e+01, 3.41442795e+01, 3.54014286e+01, 3.66774778e+01, + 3.79722954e+01, 3.92857357e+01, 4.06176708e+01, 4.19679742e+01, + 4.33365167e+01, 4.47231769e+01, 4.61278389e+01, 4.75503902e+01, + 4.89907124e+01, 5.04486988e+01, 5.19242426e+01, 5.34172417e+01, + 5.49275879e+01, 5.64551893e+01, 5.79999442e+01, 5.95617557e+01, + 6.11405369e+01, 6.27361932e+01, 6.43486345e+01, 6.59777742e+01, + 6.76235267e+01, 6.92858121e+01, 7.09645466e+01, 7.26596467e+01, + 
7.43710380e+01, 7.60986397e+01, 7.78423781e+01, 7.96021751e+01, + 8.13779634e+01, 8.31696671e+01, 8.49772153e+01, 8.68005409e+01, + 8.86395775e+01, 9.04942561e+01, 9.23645061e+01, 9.42502731e+01, + 9.61514847e+01, 9.80680795e+01, 1.00000000e+02, 1.00000000e+02, + 1.00000000e+02, 1.00000000e+02, 1.00000000e+02, 1.00000000e+02]), + np.array([11.18607265, 11.18603009, 11.18594472, 11.18581557, 11.18564097, + 11.18541843, 11.18519199, 11.18496156, 11.18472703, 11.1844883 , + 11.18424527, 11.18399784, 11.18374588, 11.1834893 , 11.18322796, + 11.18296175, 11.18269055, 11.18241421, 11.18213262, 11.18184562, + 11.18155308, 11.18125485, 11.18095078, 11.18064069, 11.18032444, + 11.18000186, 11.17967275, 11.17933695, 11.17899427, 11.1786445 , + 11.17828745, 11.17792291, 11.17755065, 11.17717045, 11.17678209, + 11.1763853 , 11.17597985, 11.17556547, 11.17514188, 11.17470881, + 11.17426595, 11.17381301, 11.17334965, 11.17287556, 11.17239039, + 11.17189377, 11.17138534, 11.1708647 , 11.17033144, 11.16978515, + 11.16922538, 11.16865167, 11.16806353, 11.16746047, 11.16684195, + 11.16620744, 11.16555634, 11.16488807, 11.164202 , 11.16349746, + 11.16277377, 11.16203019, 11.16126599, 11.16048035, 11.15967244, + 11.1588414 , 11.15798629, 11.15710614, 11.15619995, 11.15526664, + 11.15430508, 11.15331408, 11.15229239, 11.15123869, 11.15015158, + 11.1490296 , 11.14787118, 11.14667467, 11.14543835, 11.14416036, + 11.14283876, 11.14147146, 11.14005627, 11.13859086, 11.13707274, + 11.13549928, 11.13386766, 11.13217489, 11.13041779, 11.12859294, + 11.1266967 , 11.12472519, 11.12267423, 11.12053937, 11.11831582, + 11.11599844, 11.11358172, 11.1110597 , 11.108426 , 11.10567371, + 11.10279538, 11.09978295, 11.0966277 , 11.09332019, 11.08985014, + 11.08620641, 11.08237686, 11.07834823, 11.07410609, 11.0696346 , + 11.06491645, 11.05993262, 11.05466219, 11.04908215, 11.04316706, + 11.0368888 , 11.03021621, 11.02311469, 11.01554571, 11.0074663 , + 10.99882841, 10.98957818, 10.9796551 , 10.96899105, 
10.95750912, + 10.94512231, 10.93173196, 10.91722604, 10.9014771 , 10.88434051, + 10.86565166, 10.84522366, 10.82284743, 10.79828804, 10.77129148, + 10.74158012, 10.70888132, 10.67292048, 10.63345917, 10.5918398 , + 10.54672614, 10.49807324, 10.44589071, 10.39024504, 10.32740854, + 10.26731613, 10.21672145, 10.17928546, 10.1573275 , 10.15136993, + 10.15105741, 10.15095875, 10.15089447, 10.15084121, 10.15080929, + 10.15057589, 10.15188739, 10.14784724, 10.1529733 , 10.1501711 , + 10.15076255, 10.1506691 , 10.15062691, 10.15057603, 10.15051483, + 10.15043234, 10.14662993, 10.12838294, 10.09542737, 10.04609022, + 9.97678757, 9.88726669, 9.7921636 , 9.6914096 , 9.58462325, + 9.47150253, 9.35156441, 9.2244216 , 9.08940602, 8.94598789, + 8.79326392, 8.63052961, 8.45665465, 8.27013986, 8.06943749, + 7.85211392, 7.61428964, 7.34520491, 7.04057202, 6.71074938, + 6.36283553, 6.00831501, 5.67582072, 5.37708757, 5.09294781, + 4.81939841, 4.55399141, 4.29389247, 4.03558738, 3.76797394, + 3.49808153, 3.23711001, 2.99205349, 2.76983515, 2.58089777, + 2.41385684, 2.25340629, 2.10953542, 1.98586617, 1.88302975, + 1.8000942 , 1.73555286, 1.67996623, 1.63333139, 1.59668127, + 1.56800117, 1.54506361, 1.52645131, 1.51305388, 1.50364658, + 1.49718254, 1.49335652, 1.49215502, 1.49183767, 1.49161071, + 1.49149391, 1.49138601, 1.49130843, 1.49119418, 1.4909377 , + 1.48870087, 1.48330242, 1.47445542, 1.46215848, 1.44676861, + 1.42955875, 1.41109257, 1.39144269, 1.37065853, 1.34872149, + 1.32556352, 1.30110954, 1.27521425, 1.24783094, 1.21923956, + 1.19017537, 1.16326736, 1.14160655, 1.12566496, 1.11598862, + 1.11245385, 1.1123369 , 1.11157651, 1.10658059, 1.09746971, + 1.08468491, 1.06834359, 1.04890462, 1.03077803, 1.01442155, + 0.9996951 , 0.9864913 , 0.97472505, 0.96431784, 0.95524865, + 0.94743729, 0.94079212, 0.93529751, 0.93083517, 0.92728947, + 0.92454583, 0.9225759 , 0.92121771, 0.92035178, 0.91983259, + 0.91954061, 0.91931849, 0.91912384, 0.9188289 , 0.91860187, + 0.91836762, 
0.91802886, 0.91742547, 0.91644245, 0.91490881, + 0.91271727, 0.9097741 , 0.90602554, 0.90135895, 0.89576623, + 0.88925476, 0.88182792, 0.87354713, 0.86454034, 0.85494523, + 0.84489503, 0.83457996, 0.82419509, 0.81390411, 0.80386618, + 0.79420722, 0.78504077, 0.77643306, 0.76844007, 0.76108034, + 0.75439037, 0.74835685, 0.74298492, 0.73826691, 0.73418615, + 0.73069615, 0.72773759, 0.72523909, 0.72312866, 0.72133875, + 0.71981133, 0.71849898, 0.71736346, 0.71637434, 0.71550729, + 0.71474277, 0.714065 , 0.71346109, 0.71292051, 0.71243452, + 0.71199585, 0.71159843, 0.71123713, 0.71090759, 0.71060611, + 0.71032953, 0.7100751 , 0.70984046, 0.70962357, 0.70942262, + 0.70923606, 0.70906251, 0.70890075, 0.70874972, 0.70860845, + 0.70847611, 0.70835193, 0.70823525, 0.70812544, 0.70802198, + 0.70792436, 0.70783214, 0.70774493, 0.70766234, 0.70758406, + 0.70750978, 0.70743923, 0.70737215, 0.70730831, 0.7072475 , + 0.70718953, 0.70713421, 0.70708139, 0.70703091, 0.70698263, + 0.70693643, 0.70689217, 0.70684976, 0.70680908, 0.70677004, + 0.70673255, 0.70669652, 0.70666189, 0.70662858, 0.70659651, + 0.70656563, 0.70653587, 0.70650718, 0.70647952, 0.70645282, + 0.70642704, 0.70640215, 0.70637809, 0.70635483, 0.70633234, + 0.70631058, 0.70628952, 0.70627326, 0.70626138, 0.70625358, + 0.70624971, 0. , 0. , 0. , 0. , + 0. , 0. 
]), + 5) \ No newline at end of file diff --git a/src/calculate/optimize/fit_coeffs.py b/src/calculate/optimize/fit_coeffs.py index 4b5af1c..3371169 100644 --- a/src/calculate/optimize/fit_coeffs.py +++ b/src/calculate/optimize/fit_coeffs.py @@ -13,7 +13,10 @@ import itertools from calculate.convert_units import OoM, Bisymlog -from calculate.optimize.misc_fcns import penalized_loss_fcn, set_arrhenius_bnds, min_pos_system_value, max_pos_system_value +from calculate.optimize.misc_fcns import set_arrhenius_bnds, min_pos_system_value, max_pos_system_value +from calculate.optimize.adaptive_loss import adaptive_weights + +from calculate.mech_fcns import arrhenius_coefNames Ru = ct.gas_constant # Ru = 1.98720425864083 @@ -24,10 +27,11 @@ max_log_val = np.log10(max_pos_system_value) ln_k_max = np.log(1E60) # max_log_val -default_arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent'] -default_Troe_coefNames = ['activation_energy_0', 'pre_exponential_factor_0', 'temperature_exponent_0', - 'activation_energy_inf', 'pre_exponential_factor_inf', 'temperature_exponent_inf', - 'A', 'T3', 'T1', 'T2'] +default_arrhenius_coefNames = arrhenius_coefNames + +falloff_coefNames = ['A', 'T3', 'T1', 'T2'] +default_Troe_coefNames = [f"{coefName}_{suffix}" for suffix in ["0", "inf"] for coefName in arrhenius_coefNames] +default_Troe_coefNames.extend(falloff_coefNames) troe_falloff_0 = [[0.6, 200, 600, 1200], # (0, 0, 0) [0.05, 1000, -2000, 3000], # (0, 1, 0) @@ -127,7 +131,7 @@ def ln_arrhenius_jac(T, *args): loss=loss) except: return - + if A_idx is not None: popt[A_idx] = np.exp(popt[A_idx]) @@ -425,10 +429,14 @@ def objective(self, x_fit, grad=np.array([]), obj_type='obj_sum', aug_lagrangian resid = ln_Troe(T, M, *x) - self.ln_k #resid = self.ln_Troe(T, *x) - self.ln_k - if obj_type == 'obj_sum': - obj_val = penalized_loss_fcn(resid, a=self.loss_alpha, c=self.loss_scale).sum() + resid = resid.flatten() + + if obj_type == 'obj_sum': + loss_weights, C, 
alpha = adaptive_weights(resid, weights=np.array([]), C_scalar=self.loss_scale, alpha=self.loss_alpha) + obj_val = np.sum(loss_weights*(resid**2)) elif obj_type == 'obj': - obj_val = penalized_loss_fcn(resid, a=self.loss_alpha, c=self.loss_scale) + loss_weights, C, alpha = adaptive_weights(resid, weights=np.array([]), C_scalar=self.loss_scale, alpha=self.loss_alpha) + obj_val = loss_weights*(resid**2) elif obj_type == 'resid': obj_val = resid @@ -927,22 +935,22 @@ def fit_generic(rates, T, P, X, rxnIdx, coefKeys, coefNames, is_falloff_limit, m coefNames = np.array(coefNames) bnds = np.array(bnds).copy() - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) is ct.ArrheniusRate: # set x0 for all parameters - x0 = [mech.coeffs_bnds[rxnIdx]['rate'][coefName]['resetVal'] for coefName in mech.coeffs_bnds[rxnIdx]['rate']] + x0 = [mech.coeffs_bnds[rxnIdx]['rate'][coefName]['resetVal'] for coefName in default_arrhenius_coefNames] coeffs = fit_arrhenius(rates, T, x0=x0, coefNames=coefNames, bnds=bnds) - if type(rxn) is ct.ThreeBodyReaction and 'pre_exponential_factor' in coefNames: + if (rxn.reaction_type == "three-body") and ('pre_exponential_factor' in coefNames): A_idx = np.argwhere(coefNames == 'pre_exponential_factor')[0] coeffs[A_idx] = coeffs[A_idx]/mech.M(rxn) - elif type(rxn) in [ct.PlogReaction, ct.FalloffReaction]: + elif type(rxn.rate) in [ct.PlogRate, ct.FalloffRate, ct.TroeRate, ct.SriRate]: M = lambda T, P: mech.M(rxn, [T, P, X]) # get x0 for all parameters x0 = [] - for Initial_parameters in mech.coeffs_bnds[rxnIdx].values(): - for coef in Initial_parameters.values(): + for initial_parameters in mech.coeffs_bnds[rxnIdx].values(): + for coef in initial_parameters.values(): x0.append(coef['resetVal']) # set coefNames to be optimized diff --git a/src/calculate/optimize/fit_coeffs_pygmo.py b/src/calculate/optimize/fit_coeffs_pygmo.py index 1fc8716..3f8dd02 100644 --- a/src/calculate/optimize/fit_coeffs_pygmo.py +++ 
b/src/calculate/optimize/fit_coeffs_pygmo.py @@ -15,6 +15,8 @@ from calculate.convert_units import OoM from calculate.optimize.misc_fcns import penalized_loss_fcn, set_arrhenius_bnds +from calculate.mech_fcns import arrhenius_coefNames + Ru = ct.gas_constant # Ru = 1.98720425864083 @@ -26,10 +28,11 @@ max_log_val = np.log10(max_pos_system_value) ln_k_max = np.log(1E60) # max_log_val -default_arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent'] -default_Troe_coefNames = ['activation_energy_0', 'pre_exponential_factor_0', 'temperature_exponent_0', - 'activation_energy_inf', 'pre_exponential_factor_inf', 'temperature_exponent_inf', - 'A', 'T3', 'T1', 'T2'] +default_arrhenius_coefNames = arrhenius_coefNames + +falloff_coefNames = ['A', 'T3', 'T1', 'T2'] +default_Troe_coefNames = [f"{coefName}_{suffix}" for suffix in ["0", "inf"] for coefName in arrhenius_coefNames] +default_Troe_coefNames.extend(falloff_coefNames) troe_falloff_0 = [[1.0, 1E-30, 1E-30, 1500], # (0, 0, 0) [0.6, 200, 600, 1200], # (0, 0, 0) @@ -885,16 +888,16 @@ def fit_generic(rates, T, P, X, rxnIdx, coefKeys, coefNames, is_falloff_limit, m coefNames = np.array(coefNames) bnds = np.array(bnds).copy() - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) == ct.ArrheniusRate: # set x0 for all parameters x0 = [mech.coeffs_bnds[rxnIdx]['rate'][coefName]['resetVal'] for coefName in mech.coeffs_bnds[rxnIdx]['rate']] coeffs = fit_arrhenius(rates, T, x0=x0, coefNames=coefNames, bnds=bnds) - if type(rxn) is ct.ThreeBodyReaction and 'pre_exponential_factor' in coefNames: + if (rxn.reaction_type == "three-body") and ('pre_exponential_factor' in coefNames): A_idx = np.argwhere(coefNames == 'pre_exponential_factor')[0] coeffs[A_idx] = coeffs[A_idx]/mech.M(rxn) - elif type(rxn) in [ct.PlogReaction, ct.FalloffReaction]: + elif type(rxn.rate) in [ct.PlogRate, ct.FalloffRate]: M = lambda T, P: mech.M(rxn, [T, P, X]) # get x0 for all parameters diff 
--git a/src/calculate/optimize/fit_fcn.py b/src/calculate/optimize/fit_fcn.py index e32db8d..50f837e 100644 --- a/src/calculate/optimize/fit_fcn.py +++ b/src/calculate/optimize/fit_fcn.py @@ -1,5 +1,5 @@ # This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC -# and licensed under BSD-3-Clause. See License.txt in the top-level +# and licensed under BSD-3-Clause. See License.txt in the top-level # directory for license and copyright information. import io, contextlib @@ -17,18 +17,23 @@ from calculate.optimize.CheKiPEUQ_from_Frhodo import CheKiPEUQ_Frhodo_interface mpMech = {} + + def initialize_parallel_worker(mech_dict, species_dict, coeffs, coeffs_bnds, rate_bnds): - mpMech['obj'] = mech = Chemical_Mechanism() + mpMech["obj"] = mech = Chemical_Mechanism() # hide mechanism loading problems because they will already have been seen with contextlib.redirect_stderr(io.StringIO()): with contextlib.redirect_stdout(io.StringIO()): - mech.set_mechanism(mech_dict, species_dict) # load mechanism from yaml text in memory + mech.set_mechanism( + mech_dict, species_dict + ) # load mechanism from yaml text in memory mech.coeffs = deepcopy(coeffs) mech.coeffs_bnds = deepcopy(coeffs_bnds) mech.rate_bnds = deepcopy(rate_bnds) + def rescale_loss_fcn(x, loss, x_outlier=None, weights=[]): x = x.copy() weights = weights.copy() @@ -48,412 +53,595 @@ def rescale_loss_fcn(x, loss, x_outlier=None, weights=[]): else: x_q1, x_q3 = x.min(), x.max() loss_q1, loss_q3 = loss_trimmed.min(), loss_trimmed.max() - - if x_q1 != x_q3 and loss_q1 != loss_q3: # prevent divide by zero if values end up the same - loss_scaled = (x_q3 - x_q1)/(loss_q3 - loss_q1)*(loss - loss_q1) + x_q1 + + if ( + x_q1 != x_q3 and loss_q1 != loss_q3 + ): # prevent divide by zero if values end up the same + loss_scaled = (x_q3 - x_q1) / (loss_q3 - loss_q1) * (loss - loss_q1) + x_q1 else: loss_scaled = loss return loss_scaled + def update_mech_coef_opt(mech, coef_opt, x): mech_changed = False for i, idxDict in 
enumerate(coef_opt): - rxnIdx, coefName = idxDict['rxnIdx'], idxDict['coefName'] - coeffs_key = idxDict['key']['coeffs'] - if mech.coeffs[rxnIdx][coeffs_key][coefName] != x[i]: # limits mech changes. Should increase speed a little - if type(mech.coeffs[rxnIdx][coeffs_key]) is tuple: # don't know why but sometimes reverts to tuple - mech.coeffs[rxnIdx][coeffs_key] = list(mech.coeffs[rxnIdx][coeffs_key]) - + rxnIdx, coefName = idxDict["rxnIdx"], idxDict["coefName"] + coeffs_key = idxDict["key"]["coeffs"] + if ( + mech.coeffs[rxnIdx][coeffs_key][coefName] != x[i] + ): # limits mech changes. Should increase speed a little + if ( + type(mech.coeffs[rxnIdx][coeffs_key]) is tuple + ): # don't know why but sometimes reverts to tuple + mech.coeffs[rxnIdx][coeffs_key] = list(mech.coeffs[rxnIdx][coeffs_key]) + mech_changed = True mech.coeffs[rxnIdx][coeffs_key][coefName] = x[i] if mech_changed: mech.modify_reactions(mech.coeffs) # Update mechanism with new coefficients -def calculate_residuals(args_list): - def resid_func(t_offset, t_adjust, t_sim, obs_sim, t_exp, obs_exp, weights, obs_bounds=[], - loss_alpha=2, loss_c=1, loss_penalty=True, scale='Linear', - bisymlog_scaling_factor=1.0, DoF=1, opt_type='Residual', verbose=False): +def calculate_residuals(args_list): + def resid_func( + t_offset, + t_adjust, + t_sim, + obs_sim, + t_exp, + obs_exp, + weights, + obs_bounds=[], + loss_alpha=2, + loss_c=1, + loss_penalty=True, + scale="Linear", + bisymlog_scaling_factor=1.0, + DoF=1, + opt_type="Residual", + verbose=False, + ): def calc_exp_bounds(t_sim, t_exp): - t_bounds = [max([t_sim[0], t_exp[0]])] # Largest initial time in SIM and Exp - t_bounds.append(min([t_sim[-1], t_exp[-1]])) # Smallest final time in SIM and Exp + t_bounds = [ + max([t_sim[0], t_exp[0]]) + ] # Largest initial time in SIM and Exp + t_bounds.append( + min([t_sim[-1], t_exp[-1]]) + ) # Smallest final time in SIM and Exp # Values within t_bounds - exp_bounds = np.where(np.logical_and((t_exp >= 
t_bounds[0]),(t_exp <= t_bounds[1])))[0] - + exp_bounds = np.where( + np.logical_and((t_exp >= t_bounds[0]), (t_exp <= t_bounds[1])) + )[0] + return exp_bounds # Compare SIM Density Grad vs. Experimental t_sim_shifted = t_sim + t_offset + t_adjust exp_bounds = calc_exp_bounds(t_sim_shifted, t_exp) - t_exp, obs_exp, weights = t_exp[exp_bounds], obs_exp[exp_bounds], weights[exp_bounds] - if opt_type == 'Bayesian': + t_exp, obs_exp, weights = ( + t_exp[exp_bounds], + obs_exp[exp_bounds], + weights[exp_bounds], + ) + if opt_type == "Bayesian": obs_bounds = obs_bounds[exp_bounds] - + f_interp = CubicSpline(t_sim.flatten(), obs_sim.flatten()) t_exp_shifted = t_exp - t_offset - t_adjust obs_sim_interp = f_interp(t_exp_shifted) - - if scale == 'Linear': - resid = np.subtract(obs_exp, obs_sim_interp) - - elif scale == 'Log': - ind = np.argwhere(((obs_exp!=0.0)&(obs_sim_interp!=0.0))) + + if scale == "Linear": + resid = np.subtract(obs_exp, obs_sim_interp) + + elif scale == "Log": + ind = np.argwhere(((obs_exp != 0.0) & (obs_sim_interp != 0.0))) exp_bounds = exp_bounds[ind] weights = weights[ind].flatten() - + m = np.ones_like(obs_exp[ind]) i_g = obs_exp[ind] >= obs_sim_interp[ind] i_l = obs_exp[ind] < obs_sim_interp[ind] m[i_g] = np.divide(obs_exp[ind][i_g], obs_sim_interp[ind][i_g]) m[i_l] = np.divide(obs_sim_interp[ind][i_l], obs_exp[ind][i_l]) resid = np.log10(np.abs(m)).flatten() - if verbose and opt_type == 'Bayesian': - obs_exp = np.log10(np.abs(obs_exp[ind])).squeeze() # squeeze to remove extra dim + if verbose and opt_type == "Bayesian": + obs_exp = np.log10( + np.abs(obs_exp[ind]) + ).squeeze() # squeeze to remove extra dim obs_sim_interp = np.log10(np.abs(obs_sim_interp[ind])).squeeze() - obs_bounds = np.log10(np.abs(obs_bounds[ind])).squeeze() + obs_bounds = np.log10(np.abs(obs_bounds[ind])).squeeze() - elif scale == 'Bisymlog': + elif scale == "Bisymlog": bisymlog = Bisymlog(C=None, scaling_factor=bisymlog_scaling_factor) bisymlog.set_C_heuristically(obs_exp) 
obs_exp_bisymlog = bisymlog.transform(obs_exp) obs_sim_interp_bisymlog = bisymlog.transform(obs_sim_interp) - resid = np.subtract(obs_exp_bisymlog, obs_sim_interp_bisymlog) - if verbose and opt_type == 'Bayesian': + resid = np.subtract(obs_exp_bisymlog, obs_sim_interp_bisymlog) + if verbose and opt_type == "Bayesian": obs_exp = obs_exp_bisymlog obs_sim_interp = obs_sim_interp_bisymlog obs_bounds = bisymlog.transform(obs_bounds) # THIS NEEDS TO BE CHECKED - + resid_outlier = outlier(resid, c=loss_c, weights=weights) - loss = penalized_loss_fcn(resid, a=loss_alpha, c=resid_outlier, use_penalty=loss_penalty) - - #loss = rescale_loss_fcn(np.abs(resid), loss, resid_outlier, weights) + loss = penalized_loss_fcn( + resid, a=loss_alpha, c=resid_outlier, use_penalty=loss_penalty + ) + + # loss = rescale_loss_fcn(np.abs(resid), loss, resid_outlier, weights) - loss_sqr = (loss**2)*weights + loss_sqr = (loss**2) * weights wgt_sum = weights.sum() N = wgt_sum - DoF if N <= 0: N = wgt_sum - stderr_sqr = loss_sqr.sum()*wgt_sum/N - chi_sqr = loss_sqr/stderr_sqr - std_resid = chi_sqr**(0.5) - #loss_scalar = (chi_sqr*weights).sum() + stderr_sqr = loss_sqr.sum() * wgt_sum / N + chi_sqr = loss_sqr / stderr_sqr + std_resid = chi_sqr ** (0.5) + # loss_scalar = (chi_sqr*weights).sum() loss_scalar = std_resid.sum() - #loss_scalar = np.average(std_resid, weights=weights) - #loss_scalar = weighted_quantile(std_resid, 0.5, weights=weights) # median value - - if verbose: - output = {'chi_sqr': chi_sqr, 'resid': resid, 'resid_outlier': resid_outlier, - 'loss': loss_scalar, 'weights': weights, 'obs_sim_interp': obs_sim_interp, - 'obs_exp': obs_exp} - - if opt_type == 'Bayesian': # need to calculate aggregate weights to reduce outliers in bayesian - SSE = penalized_loss_fcn(resid/resid_outlier, use_penalty=False) - #SSE = penalized_loss_fcn(resid) - #SSE = rescale_loss_fcn(np.abs(resid), SSE, resid_outlier, weights) - loss_weights = loss/SSE # comparison is between selected loss fcn and SSE (L2 
loss) - output['aggregate_weights'] = weights*loss_weights - output['obs_bounds'] = obs_bounds + # loss_scalar = np.average(std_resid, weights=weights) + # loss_scalar = weighted_quantile(std_resid, 0.5, weights=weights) # median value + + if verbose: + output = { + "chi_sqr": chi_sqr, + "resid": resid, + "resid_outlier": resid_outlier, + "loss": loss_scalar, + "weights": weights, + "obs_sim_interp": obs_sim_interp, + "obs_exp": obs_exp, + } + + if ( + opt_type == "Bayesian" + ): # need to calculate aggregate weights to reduce outliers in bayesian + SSE = penalized_loss_fcn(resid / resid_outlier, use_penalty=False) + # SSE = penalized_loss_fcn(resid) + # SSE = rescale_loss_fcn(np.abs(resid), SSE, resid_outlier, weights) + loss_weights = ( + loss / SSE + ) # comparison is between selected loss fcn and SSE (L2 loss) + output["aggregate_weights"] = weights * loss_weights + output["obs_bounds"] = obs_bounds return output - - else: # needs to return single value for optimization + + else: # needs to return single value for optimization return loss_scalar - + def calc_density(x, data, dim=1): stdev = np.std(data) [q1, q3] = weighted_quantile(data, [0.25, 0.75]) - iqr = q3 - q1 # interquartile range - A = np.min([stdev, iqr/1.34])/stdev # bandwidth is multiplied by std of sample - bw = 0.9*A*len(data)**(-1./(dim+4)) + iqr = q3 - q1 # interquartile range + A = ( + np.min([stdev, iqr / 1.34]) / stdev + ) # bandwidth is multiplied by std of sample + bw = 0.9 * A * len(data) ** (-1.0 / (dim + 4)) return stats.gaussian_kde(data, bw_method=bw)(x) - + var, coef_opt, x, shock = args_list - mech = mpMech['obj'] - + mech = mpMech["obj"] + # Optimization Begins, update mechanism update_mech_coef_opt(mech, coef_opt, x) - T_reac, P_reac, mix = shock['T_reactor'], shock['P_reactor'], shock['thermo_mix'] - - SIM_kwargs = {'u_reac': shock['u2'], 'rho1': shock['rho1'], 'observable': shock['observable'], - 't_lab_save': None, 'sim_int_f': var['sim_interp_factor'], - 'ODE_solver': 
var['ode_solver'], 'rtol': var['ode_rtol'], 'atol': var['ode_atol']} - - if '0d Reactor' in var['name']: - SIM_kwargs['solve_energy'] = var['solve_energy'] - SIM_kwargs['frozen_comp'] = var['frozen_comp'] - - - SIM, verbose = mech.run(var['name'], var['t_end'], T_reac, P_reac, mix, **SIM_kwargs) - ind_var, obs_sim = SIM.independent_var[:,None], SIM.observable[:,None] - - weights = shock['weights_trim'] - obs_exp = shock['exp_data_trim'] + T_reac, P_reac, mix = shock["T_reactor"], shock["P_reactor"], shock["thermo_mix"] + + SIM_kwargs = { + "u_reac": shock["u2"], + "rho1": shock["rho1"], + "observable": shock["observable"], + "t_lab_save": None, + "sim_int_f": var["sim_interp_factor"], + "ODE_solver": var["ode_solver"], + "rtol": var["ode_rtol"], + "atol": var["ode_atol"], + } + + if "0d Reactor" in var["name"]: + SIM_kwargs["solve_energy"] = var["solve_energy"] + SIM_kwargs["frozen_comp"] = var["frozen_comp"] + + SIM, verbose = mech.run( + var["name"], var["t_end"], T_reac, P_reac, mix, **SIM_kwargs + ) + ind_var, obs_sim = SIM.independent_var[:, None], SIM.observable[:, None] + + weights = shock["weights_trim"] + obs_exp = shock["exp_data_trim"] obs_bounds = [] - if var['obj_fcn_type'] == 'Bayesian': - obs_bounds = shock['abs_uncertainties_trim'] - - if not np.any(var['t_unc']): + if var["obj_fcn_type"] == "Bayesian": + obs_bounds = shock["abs_uncertainties_trim"] + + if not np.any(var["t_unc"]): t_unc = 0 else: - t_unc_OoM = np.mean(OoM(var['t_unc'])) # Do at higher level in code? 
(computationally efficient) - # calculate time adjust with mse (loss_alpha = 2, loss_c =1) - time_adj_func = lambda t_adjust: resid_func(shock['opt_time_offset'], t_adjust*10**t_unc_OoM, - ind_var, obs_sim, obs_exp[:,0], obs_exp[:,1], weights, obs_bounds, scale=var['scale'], - bisymlog_scaling_factor= var['bisymlog_scaling_factor'], DoF=len(coef_opt), - opt_type=var['obj_fcn_type']) - - res = minimize_scalar(time_adj_func, bounds=var['t_unc']/10**t_unc_OoM, method='bounded') - t_unc = res.x*10**t_unc_OoM + t_unc_OoM = np.mean( + OoM(var["t_unc"]) + ) # Do at higher level in code? (computationally efficient) + # calculate time adjust with mse (loss_alpha = 2, loss_c =1) + time_adj_func = lambda t_adjust: resid_func( + shock["opt_time_offset"], + t_adjust * 10**t_unc_OoM, + ind_var, + obs_sim, + obs_exp[:, 0], + obs_exp[:, 1], + weights, + obs_bounds, + scale=var["scale"], + bisymlog_scaling_factor=var["bisymlog_scaling_factor"], + DoF=len(coef_opt), + opt_type=var["obj_fcn_type"], + ) + + res = minimize_scalar( + time_adj_func, bounds=var["t_unc"] / 10**t_unc_OoM, method="bounded" + ) + t_unc = res.x * 10**t_unc_OoM # calculate loss shape function (alpha) if it is set to adaptive - loss_alpha = var['loss_alpha'] + loss_alpha = var["loss_alpha"] if loss_alpha == 3.0: - loss_alpha_fcn = lambda alpha: resid_func(shock['opt_time_offset'], t_unc, - ind_var, obs_sim, obs_exp[:,0], obs_exp[:,1], weights, obs_bounds, - loss_alpha=alpha, loss_c=var['loss_c'], loss_penalty=True, scale=var['scale'], - bisymlog_scaling_factor= var['bisymlog_scaling_factor'], DoF=len(coef_opt), - opt_type=var['obj_fcn_type']) - - res = minimize_scalar(loss_alpha_fcn, bounds=[-100, 2], method='bounded') + loss_alpha_fcn = lambda alpha: resid_func( + shock["opt_time_offset"], + t_unc, + ind_var, + obs_sim, + obs_exp[:, 0], + obs_exp[:, 1], + weights, + obs_bounds, + loss_alpha=alpha, + loss_c=var["loss_c"], + loss_penalty=True, + scale=var["scale"], + 
bisymlog_scaling_factor=var["bisymlog_scaling_factor"], + DoF=len(coef_opt), + opt_type=var["obj_fcn_type"], + ) + + res = minimize_scalar(loss_alpha_fcn, bounds=[-100, 2], method="bounded") loss_alpha = res.x - - if var['obj_fcn_type'] == 'Residual': + + if var["obj_fcn_type"] == "Residual": loss_penalty = True else: loss_penalty = False - output = resid_func(shock['opt_time_offset'], t_unc, ind_var, obs_sim, obs_exp[:,0], obs_exp[:,1], - weights, obs_bounds, loss_alpha=loss_alpha, loss_c=var['loss_c'], - loss_penalty=loss_penalty, scale=var['scale'], - bisymlog_scaling_factor= var['bisymlog_scaling_factor'], - DoF=len(coef_opt), opt_type=var['obj_fcn_type'], verbose=True) - - output['shock'] = shock - output['independent_var'] = ind_var - output['observable'] = obs_sim - output['t_unc'] = t_unc - output['loss_alpha'] = loss_alpha + output = resid_func( + shock["opt_time_offset"], + t_unc, + ind_var, + obs_sim, + obs_exp[:, 0], + obs_exp[:, 1], + weights, + obs_bounds, + loss_alpha=loss_alpha, + loss_c=var["loss_c"], + loss_penalty=loss_penalty, + scale=var["scale"], + bisymlog_scaling_factor=var["bisymlog_scaling_factor"], + DoF=len(coef_opt), + opt_type=var["obj_fcn_type"], + verbose=True, + ) + + output["shock"] = shock + output["independent_var"] = ind_var + output["observable"] = obs_sim + output["t_unc"] = t_unc + output["loss_alpha"] = loss_alpha plot_stats = True if plot_stats: - x = np.linspace(output['resid'].min(), output['resid'].max(), 300) - density = calc_density(x, output['resid'], dim=1) #kernel density estimation - output['KDE'] = np.column_stack((x, density)) + x = np.linspace(output["resid"].min(), output["resid"].max(), 300) + density = calc_density(x, output["resid"], dim=1) # kernel density estimation + output["KDE"] = np.column_stack((x, density)) return output + # Using optimization vs least squares curve fit because y_range's change if time_offset != 0 class Fit_Fun: def __init__(self, input_dict): - self.parent = input_dict['parent'] - 
self.shocks2run = input_dict['shocks2run'] + self.parent = input_dict["parent"] + self.shocks2run = input_dict["shocks2run"] self.data = self.parent.series.shock - self.coef_opt = input_dict['coef_opt'] - self.rxn_coef_opt = input_dict['rxn_coef_opt'] - self.x0 = input_dict['rxn_rate_opt']['x0'] - self.mech = input_dict['mech'] + self.coef_opt = input_dict["coef_opt"] + self.rxn_coef_opt = input_dict["rxn_coef_opt"] + self.x0 = input_dict["rxn_rate_opt"]["x0"] + self.mech = input_dict["mech"] self.var = self.parent.var - self.t_unc = (-self.var['time_unc'], self.var['time_unc']) - - self.opt_type = 'local' # this is updated outside of the class - + self.t_unc = (-self.var["time_unc"], self.var["time_unc"]) + + self.opt_type = "local" # this is updated outside of the class + self.dist = self.parent.optimize.dist - self.opt_settings = {'obj_fcn_type': self.parent.optimization_settings.get('obj_fcn', 'type'), - 'scale': self.parent.optimization_settings.get('obj_fcn', 'scale'), - 'bisymlog_scaling_factor': self.parent.plot.signal.bisymlog.scaling_factor, - 'loss_alpha': self.parent.optimization_settings.get('obj_fcn', 'alpha'), - 'loss_c': self.parent.optimization_settings.get('obj_fcn', 'c'), - 'bayes_dist_type': self.parent.optimization_settings.get('obj_fcn', 'bayes_dist_type'), - 'bayes_unc_sigma': self.parent.optimization_settings.get('obj_fcn', 'bayes_unc_sigma')} - - if 'multiprocessing' in input_dict: - self.multiprocessing = input_dict['multiprocessing'] - - if 'pool' in input_dict: - self.pool = input_dict['pool'] + self.opt_settings = { + "obj_fcn_type": self.parent.optimization_settings.get("obj_fcn", "type"), + "scale": self.parent.optimization_settings.get("obj_fcn", "scale"), + "bisymlog_scaling_factor": self.parent.plot.signal.bisymlog.scaling_factor, + "loss_alpha": self.parent.optimization_settings.get("obj_fcn", "alpha"), + "loss_c": self.parent.optimization_settings.get("obj_fcn", "c"), + "bayes_dist_type": self.parent.optimization_settings.get( + 
"obj_fcn", "bayes_dist_type" + ), + "bayes_unc_sigma": self.parent.optimization_settings.get( + "obj_fcn", "bayes_unc_sigma" + ), + } + + if "multiprocessing" in input_dict: + self.multiprocessing = input_dict["multiprocessing"] + + if "pool" in input_dict: + self.pool = input_dict["pool"] else: self.multiprocessing = False - - self.signals = input_dict['signals'] - - self.i = 0 + + self.signals = input_dict["signals"] + + self.i = 0 self.__abort = False - if self.opt_settings['obj_fcn_type'] == 'Bayesian': # initialize Bayesian_dictionary if Bayesian selected - input_dict['opt_settings'] = self.opt_settings + if ( + self.opt_settings["obj_fcn_type"] == "Bayesian" + ): # initialize Bayesian_dictionary if Bayesian selected + input_dict["opt_settings"] = self.opt_settings self.CheKiPEUQ_Frhodo_interface = CheKiPEUQ_Frhodo_interface(input_dict) - - def __call__(self, s, optimizing=True): + + def __call__(self, s, optimizing=True): def append_output(output_dict, calc_resid_output): for key in calc_resid_output: if key not in output_dict: output_dict[key] = [] - + output_dict[key].append(calc_resid_output[key]) - + return output_dict - - if self.__abort: - raise Exception('Optimization terminated by user') - self.signals.log.emit('\nOptimization aborted') + + if self.__abort: + raise Exception("Optimization terminated by user") + self.signals.log.emit("\nOptimization aborted") return np.nan - + # Convert to mech values log_opt_rates = s + self.x0 x = self.fit_all_coeffs(np.exp(log_opt_rates)) - if x is None: + if x is None: return np.inf # Run Simulations output_dict = {} - - var_dict = {key: val for key, val in self.var['reactor'].items()} - var_dict['t_unc'] = self.t_unc + + var_dict = {key: val for key, val in self.var["reactor"].items()} + var_dict["t_unc"] = self.t_unc var_dict.update(self.opt_settings) - + display_ind_var = None display_observable = None - + if self.multiprocessing and len(self.shocks2run) > 1: - args_list = ((var_dict, self.coef_opt, x, shock) 
for shock in self.shocks2run) + args_list = ( + (var_dict, self.coef_opt, x, shock) for shock in self.shocks2run + ) calc_resid_outputs = self.pool.map(calculate_residuals, args_list) for calc_resid_output, shock in zip(calc_resid_outputs, self.shocks2run): append_output(output_dict, calc_resid_output) if shock is self.parent.display_shock: - display_ind_var = calc_resid_output['independent_var'] - display_observable = calc_resid_output['observable'] + display_ind_var = calc_resid_output["independent_var"] + display_observable = calc_resid_output["observable"] else: - mpMech['obj'] = self.mech - + mpMech["obj"] = self.mech + for shock in self.shocks2run: args_list = (var_dict, self.coef_opt, x, shock) calc_resid_output = calculate_residuals(args_list) append_output(output_dict, calc_resid_output) if shock is self.parent.display_shock: - display_ind_var = calc_resid_output['independent_var'] - display_observable = calc_resid_output['observable'] - - loss_resid = np.array(output_dict['loss']) - exp_loss_alpha = np.array(output_dict['loss_alpha']) + display_ind_var = calc_resid_output["independent_var"] + display_observable = calc_resid_output["observable"] - loss_alpha = self.opt_settings['loss_alpha'] + loss_resid = np.array(output_dict["loss"]) + exp_loss_alpha = np.array(output_dict["loss_alpha"]) + + loss_alpha = self.opt_settings["loss_alpha"] if loss_alpha == 3.0: if np.size(loss_resid) <= 2: # optimizing only a few experiments, use SSE loss_alpha = 2.0 - - else: # alpha is based on residual loss function, not great, but it's super slow otherwise - loss_alpha_fcn = lambda alpha: self.calculate_obj_fcn(x, loss_resid, alpha, log_opt_rates, output_dict, obj_fcn_type='Residual') - - res = minimize_scalar(loss_alpha_fcn, bounds=[-100, 2], method='bounded') + + else: # alpha is based on residual loss function, not great, but it's super slow otherwise + loss_alpha_fcn = lambda alpha: self.calculate_obj_fcn( + x, + loss_resid, + alpha, + log_opt_rates, + output_dict, + 
obj_fcn_type="Residual", + ) + + res = minimize_scalar( + loss_alpha_fcn, bounds=[-100, 2], method="bounded" + ) loss_alpha = res.x # testing loss alphas # print([loss_alpha, *exp_loss_alpha]) - obj_fcn = self.calculate_obj_fcn(x, loss_resid, loss_alpha, log_opt_rates, output_dict, obj_fcn_type=self.opt_settings['obj_fcn_type']) + obj_fcn = self.calculate_obj_fcn( + x, + loss_resid, + loss_alpha, + log_opt_rates, + output_dict, + obj_fcn_type=self.opt_settings["obj_fcn_type"], + ) # For updating self.i += 1 - if not optimizing or self.i % 1 == 0:#5 == 0: # updates plot every 5 - if obj_fcn == 0 and self.opt_settings['obj_fcn_type'] != 'Bayesian': + if not optimizing or self.i % 1 == 0: # 5 == 0: # updates plot every 5 + if obj_fcn == 0 and self.opt_settings["obj_fcn_type"] != "Bayesian": obj_fcn = np.inf - - stat_plot = {'shocks2run': self.shocks2run, 'resid': output_dict['resid'], - 'resid_outlier': self.loss_outlier, 'weights': output_dict['weights']} - - if 'KDE' in output_dict: - stat_plot['KDE'] = output_dict['KDE'] - allResid = np.concatenate(output_dict['resid'], axis=0) - - stat_plot['fit_result'] = fitres = self.dist.fit(allResid) - stat_plot['QQ'] = [] - for resid in stat_plot['resid']: - QQ = stats.probplot(resid, sparams=fitres, dist=self.dist, fit=False) + + stat_plot = { + "shocks2run": self.shocks2run, + "resid": output_dict["resid"], + "resid_outlier": self.loss_outlier, + "weights": output_dict["weights"], + } + + if "KDE" in output_dict: + stat_plot["KDE"] = output_dict["KDE"] + allResid = np.concatenate(output_dict["resid"], axis=0) + + stat_plot["fit_result"] = fitres = self.dist.fit(allResid) + stat_plot["QQ"] = [] + for resid in stat_plot["resid"]: + QQ = stats.probplot( + resid, sparams=fitres, dist=self.dist, fit=False + ) QQ = np.array(QQ).T - stat_plot['QQ'].append(QQ) - - update = {'type': self.opt_type, 'i': self.i, - 'obj_fcn': obj_fcn, 'stat_plot': stat_plot, - 's': s, 'x': x, 'coef_opt': self.coef_opt, - 'ind_var': display_ind_var, 
'observable': display_observable} - + stat_plot["QQ"].append(QQ) + + update = { + "type": self.opt_type, + "i": self.i, + "obj_fcn": obj_fcn, + "stat_plot": stat_plot, + "s": s, + "x": x, + "coef_opt": self.coef_opt, + "ind_var": display_ind_var, + "observable": display_observable, + } + self.signals.update.emit(update) if optimizing: return obj_fcn else: - return obj_fcn, x, output_dict['shock'] - - def calculate_obj_fcn(self, x, loss_resid, alpha, log_opt_rates, output_dict, obj_fcn_type='Residual', loss_outlier=0): + return obj_fcn, x, output_dict["shock"] + + def calculate_obj_fcn( + self, + x, + loss_resid, + alpha, + log_opt_rates, + output_dict, + obj_fcn_type="Residual", + loss_outlier=0, + ): if np.size(loss_resid) == 1: # optimizing single experiment loss_outlier = 0 loss_exp = loss_resid - else: # optimizing multiple experiments + else: # optimizing multiple experiments loss_min = loss_resid.min() - loss_outlier = outlier(loss_resid, c=self.opt_settings['loss_c']) - - if obj_fcn_type == 'Residual': - loss_exp = penalized_loss_fcn(loss_resid-loss_min, a=alpha, c=loss_outlier) - else: # otherwise do not include penalty for Bayesian - loss_exp = penalized_loss_fcn(loss_resid-loss_min, a=alpha, c=loss_outlier, use_penalty=False) - #loss_exp = rescale_loss_fcn(loss_resid, loss_exp) - + loss_outlier = outlier(loss_resid, c=self.opt_settings["loss_c"]) + + if obj_fcn_type == "Residual": + loss_exp = penalized_loss_fcn( + loss_resid - loss_min, a=alpha, c=loss_outlier + ) + else: # otherwise do not include penalty for Bayesian + loss_exp = penalized_loss_fcn( + loss_resid - loss_min, a=alpha, c=loss_outlier, use_penalty=False + ) + # loss_exp = rescale_loss_fcn(loss_resid, loss_exp) + self.loss_outlier = loss_outlier - if obj_fcn_type == 'Residual': + if obj_fcn_type == "Residual": if np.size(loss_resid) == 1: # optimizing single experiment obj_fcn = loss_exp[0] else: loss_exp = loss_exp - loss_exp.min() + loss_min - #obj_fcn = np.median(loss_exp) + # obj_fcn = 
np.median(loss_exp) obj_fcn = np.average(loss_exp) - elif obj_fcn_type == 'Bayesian': + elif obj_fcn_type == "Bayesian": if np.size(loss_resid) == 1: # optimizing single experiment - Bayesian_weights = np.array(output_dict['aggregate_weights'], dtype=object).flatten() + Bayesian_weights = np.array( + output_dict["aggregate_weights"], dtype=object + ).flatten() else: loss_exp = rescale_loss_fcn(loss_resid, loss_exp) - aggregate_weights = np.array(output_dict['aggregate_weights'], dtype=object) + aggregate_weights = np.array( + output_dict["aggregate_weights"], dtype=object + ) SSE = penalized_loss_fcn(loss_resid, mu=loss_min, use_penalty=False) SSE = rescale_loss_fcn(loss_resid, SSE) - exp_loss_weights = loss_exp/SSE # comparison is between selected loss fcn and SSE (L2 loss) - Bayesian_weights = np.concatenate(aggregate_weights.T*exp_loss_weights, axis=0).flatten() - + exp_loss_weights = ( + loss_exp / SSE + ) # comparison is between selected loss fcn and SSE (L2 loss) + Bayesian_weights = np.concatenate( + aggregate_weights.T * exp_loss_weights, axis=0 + ).flatten() + # need to normalize weight values between iterations - Bayesian_weights = Bayesian_weights/Bayesian_weights.sum() + Bayesian_weights = Bayesian_weights / Bayesian_weights.sum() - CheKiPEUQ_eval_dict = {'log_opt_rates': log_opt_rates, 'x': x, 'output_dict': output_dict, - 'bayesian_weights': Bayesian_weights, 'iteration_num': self.i} + CheKiPEUQ_eval_dict = { + "log_opt_rates": log_opt_rates, + "x": x, + "output_dict": output_dict, + "bayesian_weights": Bayesian_weights, + "iteration_num": self.i, + } obj_fcn = self.CheKiPEUQ_Frhodo_interface.evaluate(CheKiPEUQ_eval_dict) return obj_fcn - def fit_all_coeffs(self, all_rates): + def fit_all_coeffs(self, all_rates): coeffs = [] i = 0 for rxn_coef in self.rxn_coef_opt: - rxnIdx = rxn_coef['rxnIdx'] - T, P, X = rxn_coef['T'], rxn_coef['P'], rxn_coef['X'] - coef_bnds = [rxn_coef['coef_bnds']['lower'], rxn_coef['coef_bnds']['upper']] - rxn_rates = 
all_rates[i:i+len(T)] + rxnIdx = rxn_coef["rxnIdx"] + T, P, X = rxn_coef["T"], rxn_coef["P"], rxn_coef["X"] + coef_bnds = [rxn_coef["coef_bnds"]["lower"], rxn_coef["coef_bnds"]["upper"]] + rxn_rates = all_rates[i : i + len(T)] if len(coeffs) == 0: - coeffs = fit_coeffs(rxn_rates, T, P, X, rxnIdx, rxn_coef['key'], rxn_coef['coefName'], - rxn_coef['is_falloff_limit'], coef_bnds, self.mech, self.pool) + coeffs = fit_coeffs( + rxn_rates, + T, + P, + X, + rxnIdx, + rxn_coef["key"], + rxn_coef["coefName"], + rxn_coef["is_falloff_limit"], + coef_bnds, + self.mech, + self.pool, + ) if coeffs is None: return else: - coeffs_append = fit_coeffs(rxn_rates, T, P, X, rxnIdx, rxn_coef['key'], rxn_coef['coefName'], - rxn_coef['is_falloff_limit'], coef_bnds, self.mech, self.pool) + coeffs_append = fit_coeffs( + rxn_rates, + T, + P, + X, + rxnIdx, + rxn_coef["key"], + rxn_coef["coefName"], + rxn_coef["is_falloff_limit"], + coef_bnds, + self.mech, + self.pool, + ) if coeffs_append is None: return coeffs = np.append(coeffs, coeffs_append) - + i += len(T) - return coeffs \ No newline at end of file + return coeffs diff --git a/src/calculate/optimize/mech_optimize.py b/src/calculate/optimize/mech_optimize.py index f15a6d0..57847ef 100644 --- a/src/calculate/optimize/mech_optimize.py +++ b/src/calculate/optimize/mech_optimize.py @@ -17,8 +17,9 @@ from calculate.optimize.misc_fcns import rates, set_bnds from calculate.optimize.fit_coeffs import fit_generic as Troe_fit +from calculate.mech_fcns import arrhenius_coefNames + Ru = ct.gas_constant -default_arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent'] class Multithread_Optimize: def __init__(self, parent): @@ -241,10 +242,10 @@ def _set_rxn_coef_opt(self, min_T_range=500, min_P_range_factor=2): rxn_coef['coef_bnds'] = set_bnds(mech, rxnIdx, rxn_coef['key'], rxn_coef['coefName']) - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) is ct.ArrheniusRate: P = P_median 
- elif type(rxn) is ct.PlogReaction: + elif type(rxn.rate) is ct.PlogRate: P = [] for PlogRxn in mech.coeffs[rxnIdx]: P.append(PlogRxn['Pressure']) @@ -252,17 +253,17 @@ def _set_rxn_coef_opt(self, min_T_range=500, min_P_range_factor=2): if len(P) < 4: P = np.geomspace(np.min(P), np.max(P), 4) - if type(rxn) is ct.FalloffReaction: + if type(rxn.rate) in [ct.FalloffRate, ct.TroeRate, ct.SriRate]: P = np.linspace(P_bnds[0], P_bnds[1], 3) # set rxn_coef dict - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) is ct.ArrheniusRate: n_coef = len(rxn_coef['coefIdx']) rxn_coef['invT'] = np.linspace(*invT_bnds, n_coef) rxn_coef['T'] = np.divide(10000, rxn_coef['invT']) rxn_coef['P'] = np.ones_like(rxn_coef['T'])*P - elif type(rxn) in [ct.PlogReaction, ct.FalloffReaction]: + elif type(rxn.rate) in [ct.PlogRate, ct.FalloffRate, ct.TroeRate, ct.SriRate]: rxn_coef['invT'] = [] rxn_coef['P'] = [] @@ -280,7 +281,7 @@ def _set_rxn_coef_opt(self, min_T_range=500, min_P_range_factor=2): rxn_coef['P'].append(np.ones(n_coef)*P[-1]) # will evaluate HPL if HPL is constrained, else this value # set conditions for middle conditions (coefficients are always unbounded) - if type(rxn) is ct.PlogReaction: + if type(rxn.rate) is ct.PlogRate: invT = np.linspace(*invT_bnds, 3) P, invT = np.meshgrid(P[1:-1], invT) @@ -314,7 +315,7 @@ def _set_rxn_rate_opt(self): rxn = mech.gas.reaction(rxnIdx) rate_bnds_val = mech.rate_bnds[rxnIdx]['value'] rate_bnds_type = mech.rate_bnds[rxnIdx]['type'] - if type(rxn) in [ct.PlogReaction, ct.FalloffReaction]: # if falloff, change arrhenius rates to LPL/HPL if they are not constrained + if type(rxn.rate) in [ct.PlogRate, ct.FalloffRate, ct.TroeRate, ct.SriRate]: # if falloff, change arrhenius rates to LPL/HPL if they are not constrained key_list = np.array([x['coeffs_bnds'] for x in rxn_coef['key']]) key_count = collections.Counter(key_list) @@ -329,9 +330,9 @@ def _set_rxn_rate_opt(self): if 
np.any(rxn_coef['coef_bnds']['exist'][idx_match]) or key_count[coef_type_key] < 3: rxn_coef['is_falloff_limit'][n] = True - if type(rxn) is ct.FalloffReaction: + if type(rxn.rate) in [ct.FalloffRate, ct.TroeRate, ct.SriRate]: x = [] - for ArrheniusCoefName in default_arrhenius_coefNames: + for ArrheniusCoefName in arrhenius_coefNames: x.append(mech.coeffs_bnds[rxnIdx][coef_type_key][ArrheniusCoefName]['resetVal']) rxn_rate_opt['x0'][i+n] = np.log(x[1]) + x[2]*np.log(T) - x[0]/(Ru*T) @@ -373,17 +374,17 @@ def _update_gas(self): # TODO: What happens if a second optimization is run? for rxn_coef_idx, rxn_coef in enumerate(self.rxn_coef_opt): # TODO: RXN_COEF_OPT INCORRECT FOR CHANGING RXN TYPES rxnIdx = rxn_coef['rxnIdx'] rxn = mech.gas.reaction(rxnIdx) - if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]: + if type(rxn.rate) is ct.ArrheniusRate: continue # arrhenius type equations don't need to be converted T, P, X = rxn_coef['T'], rxn_coef['P'], rxn_coef['X'] M = lambda T, P: mech.M(rxn, [T, P, X]) rates = np.exp(self.rxn_rate_opt['x0'][i:i+len(T)]) - if type(rxn) is ct.FalloffReaction: + if type(rxn.rate) in [ct.FalloffRate, ct.TroeRate, ct.SriRate]: lb = rxn_coef['coef_bnds']['lower'] ub = rxn_coef['coef_bnds']['upper'] - if rxn.falloff.type == 'SRI': + if type(rxn.rate) is not ct.TroeRate: rxns_changed.append(rxn_coef['rxnIdx']) rxn_coef['coef_x0'] = Troe_fit(rates, T, P, X, rxnIdx, rxn_coef['key'], [], rxn_coef['is_falloff_limit'], mech, [lb, ub], accurate_fit=True) @@ -409,7 +410,7 @@ def _update_gas(self): # TODO: What happens if a second optimization is run? 
n = 0 for key in ['low_rate', 'high_rate']: - for coefName in default_arrhenius_coefNames: + for coefName in arrhenius_coefNames: rxn_coef['key'][n]['coeffs'] = key # change key value to match new reaction type mech.coeffs[rxnIdx][key][coefName] = rxn_coef['coef_x0'][n] # updates new arrhenius values @@ -419,7 +420,7 @@ def _update_gas(self): # TODO: What happens if a second optimization is run? # set reset_mech for new mechanism generate_new_mech = True - reset_mech[rxnIdx]['rxnType'] = 'FalloffReaction' + reset_mech[rxnIdx]['rxnType'] = 'Falloff Reaction' reset_mech[rxnIdx]['rxnCoeffs'] = mech.coeffs[rxnIdx] i += len(T) diff --git a/src/calculate/optimize/misc_fcns.py b/src/calculate/optimize/misc_fcns.py index 8e38a9d..ed3c640 100644 --- a/src/calculate/optimize/misc_fcns.py +++ b/src/calculate/optimize/misc_fcns.py @@ -3,10 +3,9 @@ # directory for license and copyright information. import numpy as np -from scipy import interpolate -from numba import jit import cantera as ct -import pathlib, sys + +from calculate.mech_fcns import arrhenius_coefNames Ru = ct.gas_constant @@ -16,19 +15,6 @@ T_min = 300 T_max = 6000 -default_arrhenius_coefNames = ['activation_energy', 'pre_exponential_factor', 'temperature_exponent'] - - -# interpolation function for Z from loss function -path = {'main': pathlib.Path(sys.argv[0]).parents[0].resolve()} -path['Z_tck_spline.dat'] = path['main'] / 'data/loss_partition_fcn_tck_spline.dat' - -tck = [] -with open(path['Z_tck_spline.dat']) as f: - for i in range(5): - tck.append(np.array(f.readline().split(','), dtype=float)) - -ln_Z = interpolate.RectBivariateSpline._from_tck(tck) def rates(rxn_coef_opt, mech): output = [] @@ -40,130 +26,13 @@ def rates(rxn_coef_opt, mech): return np.log(output) -def weighted_quantile(values, quantiles, weights=None, values_sorted=False, old_style=False): - """ https://stackoverflow.com/questions/21844024/weighted-percentile-using-numpy - Very close to numpy.percentile, but supports weights. 
- NOTE: quantiles should be in [0, 1]! - :param values: numpy.array with data - :param quantiles: array-like with many quantiles needed - :param sample_weight: array-like of the same length as `array` - :param values_sorted: bool, if True, then will avoid sorting of - initial array - :param old_style: if True, will correct output to be consistent - with numpy.percentile. - :return: numpy.array with computed quantiles. - """ - finite_idx = np.where(np.isfinite(values)) - values = np.array(values)[finite_idx] - quantiles = np.array(quantiles) - if weights is None or len(weights) == 0: - weights = np.ones_like(values) - else: - weights = np.array(weights)[finite_idx] - - assert np.all(quantiles >= 0) and np.all(quantiles <= 1), \ - 'quantiles should be in [0, 1]' - - if not values_sorted: - sorter = np.argsort(values) - values = values[sorter] - weights = weights[sorter] - - weighted_quantiles = np.cumsum(weights) - 0.5 * weights - if old_style: # To be convenient with numpy.percentile - weighted_quantiles -= weighted_quantiles[0] - weighted_quantiles /= weighted_quantiles[-1] - else: - weighted_quantiles /= np.sum(weights) - - return np.interp(quantiles, weighted_quantiles, values) - -def outlier(x, c=1, weights=[], max_iter=25, percentile=0.25): - def diff(x_outlier): - if len(x_outlier) < 2: - return 1 - else: - return np.diff(x_outlier)[0] - - x = np.abs(x.copy()) - percentiles = [percentile, 1-percentile] - x_outlier = [] - # define outlier with 1.5 IQR rule - for n in range(max_iter): - if diff(x_outlier) == 0: # iterate until res_outlier is the same as prior iteration - break - - if len(x_outlier) > 0: - x = x[x < x_outlier[-1]] - - [q1, q3] = weighted_quantile(x, percentiles, weights=weights) - iqr = q3 - q1 # interquartile range - - if len(x_outlier) == 2: - del x_outlier[0] - - x_outlier.append(q3 + iqr*1.5) - - x_outlier = x_outlier[-1] - - return x_outlier*c # decreasing outliers increases outlier rejection - -@jit(nopython=True, error_model='numpy') -def 
generalized_loss_fcn(x, mu=0, a=2, c=1): # defaults to sum of squared error - x_c_2 = ((x-mu)/c)**2 - - if a == 1: # generalized function reproduces - loss = (x_c_2 + 1)**(0.5) - 1 - if a == 2: - loss = 0.5*x_c_2 - elif a == 0: - loss = np.log(0.5*x_c_2+1) - elif a == -2: # generalized function reproduces - loss = 2*x_c_2/(x_c_2 + 4) - elif a <= -100: # supposed to be negative infinity - loss = 1 - np.exp(-0.5*x_c_2) - else: - loss = np.abs(a-2)/a*((x_c_2/np.abs(a-2) + 1)**(a/2) - 1) - - #loss = np.exp(np.log(loss) + a*np.log(c)) + mu # multiplying by c^a is not necessary, but makes order appropriate - #loss = loss*c**a + mu # multiplying by c^a is not necessary, but makes order appropriate - loss = loss + mu - - return loss - -# penalize the loss function using approximate partition function -tau_min = 1.0 -tau_max = 250.0 -def penalized_loss_fcn(x, mu=0, a=2, c=1, use_penalty=True): # defaults to sum of squared error - loss = generalized_loss_fcn(x, mu, a, c) - - if use_penalty: - tau = 10.0*c - if tau < tau_min: - tau = tau_min - elif tau > tau_max: - tau = tau_max - - penalty = np.log(c) + ln_Z(tau, a)[0][0] # approximate partition function - loss += penalty - - if not np.isfinite(loss).any(): - print(mu, a, c, penalty) - print(x) - - #non_zero_idx = np.where(loss > 0.0) - #ln_loss = np.log(loss[non_zero_idx]) - #loss[non_zero_idx] = np.exp(ln_loss + a*np.log(c)) + mu - - return loss - def set_bnds(mech, rxnIdx, keys, coefNames): rxn = mech.gas.reaction(rxnIdx) coef_bnds = {'lower': [], 'upper': [], 'exist': []} for coefNum, (key, coefName) in enumerate(zip(keys, coefNames)): - if coefName not in default_arrhenius_coefNames: continue # skip anything not Arrhenius. Falloff follows this + if coefName not in arrhenius_coefNames: continue # skip anything not Arrhenius. 
Falloff follows this coef_x0 = mech.coeffs_bnds[rxnIdx][key['coeffs_bnds']][coefName]['resetVal'] coef_limits = mech.coeffs_bnds[rxnIdx][key['coeffs_bnds']][coefName]['limits']() @@ -198,7 +67,7 @@ def set_bnds(mech, rxnIdx, keys, coefNames): coef_bnds['upper'].append(coef_limits[1]) coef_bnds['exist'].append([True, True]) - if type(rxn) in [ct.FalloffReaction, ct.PlogReaction]: + if type(rxn.rate) in [ct.PlogRate, ct.FalloffRate, ct.TroeRate, ct.SriRate]: for coef in ['A', 'T3', 'T1', 'T2']: coef_bnds['exist'].append([False, False]) coef_bnds['lower'].append(min_neg_system_value) diff --git a/src/calculate/optimize/optimize_worker.py b/src/calculate/optimize/optimize_worker.py index 5fb4019..c96e3f2 100644 --- a/src/calculate/optimize/optimize_worker.py +++ b/src/calculate/optimize/optimize_worker.py @@ -72,7 +72,7 @@ def trim_shocks(self): # trim shocks from zero weighted data shock['weights_trim'] = weights[exp_bounds] shock['exp_data_trim'] = shock['exp_data'][exp_bounds,:] if 'abs_uncertainties' in shock: - shock['abs_uncertainties_trim'] = shock['abs_uncertainties'][exp_bounds,:] + shock['abs_uncertainties_trim'] = shock['abs_uncertainties'][exp_bounds, :] def optimize_coeffs(self): parent = self.parent @@ -334,7 +334,7 @@ def gradient(self, x): num_gen = int(np.ceil(1E20/pop_size)) prob = pygmo.problem(pygmo_objective_fcn(self.obj_fcn, tuple(bnds))) - pop = pygmo.population(prob, pop_size) + pop = pygmo.population(prob, pop_size - 1) pop.push_back(x = x0) # puts initial guess into the initial population # all coefficients/rules should be optimized if they're to be used @@ -394,6 +394,8 @@ def rbfopt(self, x0, bnds, options): # noisy, cheap function option. 
supports d max_evaluations=max_eval, max_cycles=1E30, max_clock_time=max_time, + init_sample_fraction=np.size(x0) + 1, + max_random_init=np.size(x0) + 2, minlp_solver_path=path['bonmin'], nlp_solver_path=path['ipopt']) algo = rbfopt.RbfoptAlgorithm(settings, bb, init_node_pos=x0) diff --git a/src/calculate/reactors.py b/src/calculate/reactors.py index 2c9bdbe..ab48e20 100644 --- a/src/calculate/reactors.py +++ b/src/calculate/reactors.py @@ -1,11 +1,11 @@ # This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC -# and licensed under BSD-3-Clause. See License.txt in the top-level +# and licensed under BSD-3-Clause. See License.txt in the top-level # directory for license and copyright information. import sys, os, io, stat, contextlib, pathlib, time from copy import deepcopy import cantera as ct -from cantera import interrupts, cti2yaml#, ck2yaml, ctml2yaml +from cantera import interrupts, cti2yaml # , ck2yaml, ctml2yaml import numpy as np from calculate import shock_fcns, integrate import ck2yaml @@ -13,102 +13,131 @@ # list of all possible variables -all_var = {'Laboratory Time': {'SIM_name': 't_lab', 'sub_type': None}, - 'Shockwave Time': {'SIM_name': 't_shock', 'sub_type': None}, - 'Gas Velocity': {'SIM_name': 'vel', 'sub_type': None}, - 'Temperature': {'SIM_name': 'T', 'sub_type': None}, - 'Pressure': {'SIM_name': 'P', 'sub_type': None}, - 'Enthalpy': {'SIM_name': 'h', 'sub_type': ['total', 'species']}, - 'Entropy': {'SIM_name': 's', 'sub_type': ['total', 'species']}, - 'Density': {'SIM_name': 'rho', 'sub_type': None}, - 'Density Gradient': {'SIM_name': 'drhodz', 'sub_type': ['total', 'rxn']}, - '% Density Gradient': {'SIM_name': 'perc_drhodz', 'sub_type': ['rxn']}, - '\u00B1 % |Density Gradient|': {'SIM_name': 'perc_abs_drhodz', 'sub_type': ['rxn']}, - 'Mole Fraction': {'SIM_name': 'X', 'sub_type': ['species']}, - 'Mass Fraction': {'SIM_name': 'Y', 'sub_type': ['species']}, - 'Concentration': {'SIM_name': 'conc', 'sub_type': ['species']}, - 'Net 
Production Rate': {'SIM_name': 'wdot', 'sub_type': ['species']}, - 'Creation Rate': {'SIM_name': 'wdotfor', 'sub_type': ['species']}, - 'Destruction Rate': {'SIM_name': 'wdotrev', 'sub_type': ['species']}, - 'Heat Release Rate': {'SIM_name': 'HRR', 'sub_type': ['total', 'rxn']}, - 'Delta Enthalpy (Heat of Reaction)':{'SIM_name': 'delta_h', 'sub_type': ['rxn']}, - 'Delta Entropy': {'SIM_name': 'delta_s', 'sub_type': ['rxn']}, - 'Equilibrium Constant': {'SIM_name': 'eq_con', 'sub_type': ['rxn']}, - 'Forward Rate Constant': {'SIM_name': 'rate_con', 'sub_type': ['rxn']}, - 'Reverse Rate Constant': {'SIM_name': 'rate_con_rev', 'sub_type': ['rxn']}, - 'Net Rate of Progress': {'SIM_name': 'net_ROP', 'sub_type': ['rxn']}, - 'Forward Rate of Progress': {'SIM_name': 'for_ROP', 'sub_type': ['rxn']}, - 'Reverse Rate of Progress': {'SIM_name': 'rev_ROP', 'sub_type': ['rxn']}} - -rev_all_var = {all_var[key]['SIM_name']: - {'name': key, 'sub_type': all_var[key]['sub_type']} for key in all_var.keys()} +all_var = { + "Laboratory Time": {"SIM_name": "t_lab", "sub_type": None}, + "Shockwave Time": {"SIM_name": "t_shock", "sub_type": None}, + "Gas Velocity": {"SIM_name": "vel", "sub_type": None}, + "Temperature": {"SIM_name": "T", "sub_type": None}, + "Pressure": {"SIM_name": "P", "sub_type": None}, + "Enthalpy": {"SIM_name": "h", "sub_type": ["total", "species"]}, + "Entropy": {"SIM_name": "s", "sub_type": ["total", "species"]}, + "Density": {"SIM_name": "rho", "sub_type": None}, + "Density Gradient": {"SIM_name": "drhodz", "sub_type": ["total", "rxn"]}, + "% Density Gradient": {"SIM_name": "perc_drhodz", "sub_type": ["rxn"]}, + "\u00B1 % |Density Gradient|": {"SIM_name": "perc_abs_drhodz", "sub_type": ["rxn"]}, + "Mole Fraction": {"SIM_name": "X", "sub_type": ["species"]}, + "Mass Fraction": {"SIM_name": "Y", "sub_type": ["species"]}, + "Concentration": {"SIM_name": "conc", "sub_type": ["species"]}, + "Net Production Rate": {"SIM_name": "wdot", "sub_type": ["species"]}, + "Creation 
Rate": {"SIM_name": "wdotfor", "sub_type": ["species"]}, + "Destruction Rate": {"SIM_name": "wdotrev", "sub_type": ["species"]}, + "Heat Release Rate": {"SIM_name": "HRR", "sub_type": ["total", "rxn"]}, + "Delta Enthalpy (Heat of Reaction)": {"SIM_name": "delta_h", "sub_type": ["rxn"]}, + "Delta Entropy": {"SIM_name": "delta_s", "sub_type": ["rxn"]}, + "Equilibrium Constant": {"SIM_name": "eq_con", "sub_type": ["rxn"]}, + "Forward Rate Constant": {"SIM_name": "rate_con", "sub_type": ["rxn"]}, + "Reverse Rate Constant": {"SIM_name": "rate_con_rev", "sub_type": ["rxn"]}, + "Net Rate of Progress": {"SIM_name": "net_ROP", "sub_type": ["rxn"]}, + "Forward Rate of Progress": {"SIM_name": "for_ROP", "sub_type": ["rxn"]}, + "Reverse Rate of Progress": {"SIM_name": "rev_ROP", "sub_type": ["rxn"]}, +} + +rev_all_var = { + all_var[key]["SIM_name"]: {"name": key, "sub_type": all_var[key]["sub_type"]} + for key in all_var.keys() +} # translation dictionary between SIM name and ct.SolutionArray name -SIM_Dict = {'t_lab': 't', 't_shock': 't_shock', 'z': 'z', 'A': 'A', 'vel': 'vel', 'T': 'T', 'P': 'P', - 'h_tot': 'enthalpy_mole', 'h': 'partial_molar_enthalpies', - 's_tot': 'entropy_mole', 's': 'partial_molar_entropies', - 'rho': 'density', 'drhodz_tot': 'drhodz_tot', 'drhodz': 'drhodz', 'perc_drhodz': 'perc_drhodz', - 'Y': 'Y', 'X': 'X', 'conc': 'concentrations', 'wdot': 'net_production_rates', - 'wdotfor': 'creation_rates', 'wdotrev': 'destruction_rates', - 'HRR_tot': 'heat_release_rate', 'HRR': 'heat_production_rates', - 'delta_h': 'delta_enthalpy', 'delta_s': 'delta_entropy', - 'eq_con': 'equilibrium_constants', 'rate_con': 'forward_rate_constants', - 'rate_con_rev': 'reverse_rate_constants', 'net_ROP': 'net_rates_of_progress', - 'for_ROP': 'forward_rates_of_progress', 'rev_ROP': 'reverse_rates_of_progress'} +SIM_Dict = { + "t_lab": "t", + "t_shock": "t_shock", + "z": "z", + "A": "A", + "vel": "vel", + "T": "T", + "P": "P", + "h_tot": "enthalpy_mole", + "h": 
"partial_molar_enthalpies", + "s_tot": "entropy_mole", + "s": "partial_molar_entropies", + "rho": "density", + "drhodz_tot": "drhodz_tot", + "drhodz": "drhodz", + "perc_drhodz": "perc_drhodz", + "Y": "Y", + "X": "X", + "conc": "concentrations", + "wdot": "net_production_rates", + "wdotfor": "creation_rates", + "wdotrev": "destruction_rates", + "HRR_tot": "heat_release_rate", + "HRR": "heat_production_rates", + "delta_h": "delta_enthalpy", + "delta_s": "delta_entropy", + "eq_con": "equilibrium_constants", + "rate_con": "forward_rate_constants", + "rate_con_rev": "reverse_rate_constants", + "net_ROP": "net_rates_of_progress", + "for_ROP": "forward_rates_of_progress", + "rev_ROP": "reverse_rates_of_progress", +} + class SIM_Property: def __init__(self, name, parent=None): self.name = name self.parent = parent self.conversion = None # this needs to be assigned per property - self.value = {'SI': np.array([]), 'CGS': np.array([])} - self.ndim = self.value['SI'].ndim + self.value = {"SI": np.array([]), "CGS": np.array([])} + self.ndim = self.value["SI"].ndim def clear(self): - self.value = {'SI': np.array([]), 'CGS': np.array([])} - self.ndim = self.value['SI'].ndim + self.value = {"SI": np.array([]), "CGS": np.array([])} + self.ndim = self.value["SI"].ndim - def __call__(self, idx=None, units='CGS'): # units must be 'CGS' or 'SI' + def __call__(self, idx=None, units="CGS"): # units must be 'CGS' or 'SI' # assumes Sim data comes in as SI and is converted to CGS # values to be calculated post-simulation - if len(self.value['SI']) == 0 or np.isnan(self.value['SI']).all(): + if len(self.value["SI"]) == 0 or np.isnan(self.value["SI"]).all(): parent = self.parent - if self.name == 'drhodz_tot': - self.value['SI'] = shock_fcns.drhodz(parent.states) + if self.name == "drhodz_tot": + self.value["SI"] = shock_fcns.drhodz(parent.states) - elif self.name == 'drhodz': - self.value['SI'] = shock_fcns.drhodz_per_rxn(parent.states) + elif self.name == "drhodz": + self.value["SI"] = 
shock_fcns.drhodz_per_rxn(parent.states) - elif self.name == 'perc_drhodz': - drhodz_tot = parent.drhodz_tot(units='SI')[:,None] - drhodz = parent.drhodz(units='SI').T + elif self.name == "perc_drhodz": + drhodz_tot = parent.drhodz_tot(units="SI")[:, None] + drhodz = parent.drhodz(units="SI").T if not np.any(drhodz_tot): - self.value['SI'] = np.zeros_like(drhodz) + self.value["SI"] = np.zeros_like(drhodz) else: - self.value['SI'] = drhodz/np.abs(drhodz_tot)*100 + self.value["SI"] = drhodz / np.abs(drhodz_tot) * 100 - elif self.name == 'perc_abs_drhodz': - drhodz_tot = parent.drhodz_tot(units='SI')[:,None] - drhodz = parent.drhodz(units='SI').T + elif self.name == "perc_abs_drhodz": + drhodz_tot = parent.drhodz_tot(units="SI")[:, None] + drhodz = parent.drhodz(units="SI").T if not np.any(drhodz_tot): - self.value['SI'] = np.zeros_like(drhodz) + self.value["SI"] = np.zeros_like(drhodz) else: - self.value['SI'] = drhodz/np.abs(drhodz).sum(axis=1)[:,None]*100 + self.value["SI"] = ( + drhodz / np.abs(drhodz).sum(axis=1)[:, None] * 100 + ) else: - self.value['SI'] = getattr(parent.states, SIM_Dict[self.name]) + self.value["SI"] = getattr(parent.states, SIM_Dict[self.name]) - if self.value['SI'].ndim > 1: # Transpose if matrix - self.value['SI'] = self.value['SI'].T + if self.value["SI"].ndim > 1: # Transpose if matrix + self.value["SI"] = self.value["SI"].T - self.ndim = self.value['SI'].ndim + self.ndim = self.value["SI"].ndim # currently converts entire list of properties rather than by index - if units == 'CGS' and len(self.value['CGS']) == 0: + if units == "CGS" and len(self.value["CGS"]) == 0: if self.conversion is None: - self.value['CGS'] = self.value['SI'] + self.value["CGS"] = self.value["SI"] else: - self.value['CGS'] = self.conversion(self.value['SI']) + self.value["CGS"] = self.conversion(self.value["SI"]) return self.value[units] @@ -121,58 +150,71 @@ def __init__(self, num=None, states=None, reactor_vars=[]): self.reactor_var = {} for var in reactor_vars: 
if var in self.rev_all_var: - self.reactor_var[self.rev_all_var[var]['name']] = var + self.reactor_var[self.rev_all_var[var]["name"]] = var - if num is None: # if no simulation stop here + if num is None: # if no simulation stop here self.reactor_var = {} return - self.conv = {'conc': 1E-3, 'wdot': 1E-3, 'P': 760/101325, 'vel': 1E2, - 'rho': 1E-3, 'drhodz_tot': 1E-5, 'drhodz': 1E-5, - 'delta_h': 1E-3/4184, 'h_tot': 1E-3/4184, 'h': 1E-3/4184, # to kcal - 'delta_s': 1/4184, 's_tot': 1/4184, 's': 1/4184, - 'eq_con': 1E3**np.array(num['reac'] - num['prod'])[:,None], - 'rate_con': np.power(1E3,num['reac']-1)[:,None], - 'rate_con_rev': np.power(1E3,num['prod']-1)[:,None], - 'net_ROP': 1E-3/3.8, # Don't understand 3.8 value - 'for_ROP': 1E-3/3.8, # Don't understand 3.8 value - 'rev_ROP': 1E-3/3.8} # Don't understand 3.8 value + self.conv = { + "conc": 1e-3, + "wdot": 1e-3, + "P": 760 / 101325, + "vel": 1e2, + "rho": 1e-3, + "drhodz_tot": 1e-5, + "drhodz": 1e-5, + "delta_h": 1e-3 / 4184, + "h_tot": 1e-3 / 4184, + "h": 1e-3 / 4184, # to kcal + "delta_s": 1 / 4184, + "s_tot": 1 / 4184, + "s": 1 / 4184, + "eq_con": 1e3 ** np.array(num["reac"] - num["prod"])[:, None], + "rate_con": np.power(1e3, num["reac"] - 1)[:, None], + "rate_con_rev": np.power(1e3, num["prod"] - 1)[:, None], + "net_ROP": 1e-3 / 3.8, # Don't understand 3.8 value + "for_ROP": 1e-3 / 3.8, # Don't understand 3.8 value + "rev_ROP": 1e-3 / 3.8, + } # Don't understand 3.8 value for name in reactor_vars: property = SIM_Property(name, parent=self) if name in self.conv: - property.conversion = lambda x, s=self.conv[name]: x*s + property.conversion = lambda x, s=self.conv[name]: x * s setattr(self, name, property) - def set_independent_var(self, ind_var, units='CGS'): + def set_independent_var(self, ind_var, units="CGS"): self.independent_var = getattr(self, ind_var)(units=units) - def set_observable(self, observable, units='CGS'): - k = observable['sub'] - if observable['main'] == 'Temperature': + def 
set_observable(self, observable, units="CGS"): + k = observable["sub"] + if observable["main"] == "Temperature": self.observable = self.T(units=units) - elif observable['main'] == 'Pressure': + elif observable["main"] == "Pressure": self.observable = self.P(units=units) - elif observable['main'] == 'Density Gradient': + elif observable["main"] == "Density Gradient": self.observable = self.drhodz_tot(units=units) - elif observable['main'] == 'Heat Release Rate': + elif observable["main"] == "Heat Release Rate": self.observable = self.HRR_tot(units=units) - elif observable['main'] == 'Mole Fraction': + elif observable["main"] == "Mole Fraction": self.observable = self.X(units=units) - elif observable['main'] == 'Mass Fraction': + elif observable["main"] == "Mass Fraction": self.observable = self.Y(units=units) - elif observable['main'] == 'Concentration': + elif observable["main"] == "Concentration": self.observable = self.conc(units=units) - if self.observable.ndim > 1: # reduce observable down to only plotted information + if ( + self.observable.ndim > 1 + ): # reduce observable down to only plotted information self.observable = self.observable[k] - def finalize(self, success, ind_var, observable, units='CGS'): + def finalize(self, success, ind_var, observable, units="CGS"): self.set_independent_var(ind_var, units) self.set_observable(observable, units) - + self.success = success - + class Reactor: def __init__(self, mech): @@ -180,67 +222,105 @@ def __init__(self, mech): self.ODE_success = False def run(self, reactor_choice, t_end, T_reac, P_reac, mix, **kwargs): - def list2ct_mixture(mix): # list in the form of [[species, mol_frac], [species, mol_frac],...] - return ', '.join("{!s}:{!r}".format(species, mol_frac) for (species, mol_frac) in mix) - - details = {'success': False, 'message': []} - + def list2ct_mixture( + mix, + ): # list in the form of [[species, mol_frac], [species, mol_frac],...] 
+ return ", ".join( + "{!s}:{!r}".format(species, mol_frac) for (species, mol_frac) in mix + ) + + details = {"success": False, "message": []} + if isinstance(mix, list): mix = list2ct_mixture(mix) - + mech_out = self.mech.set_TPX(T_reac, P_reac, mix) - if not mech_out['success']: - details['success'] = False - details['message'] = mech_out['message'] + if not mech_out["success"]: + details["success"] = False + details["message"] = mech_out["message"] return None, mech_out - - #start = timer() - if reactor_choice == 'Incident Shock Reactor': - SIM, details = self.incident_shock_reactor(self.mech.gas, details, t_end, **kwargs) - elif '0d Reactor' in reactor_choice: - if reactor_choice == '0d Reactor - Constant Volume': + + # start = timer() + if reactor_choice == "Incident Shock Reactor": + SIM, details = self.incident_shock_reactor( + self.mech.gas, details, t_end, **kwargs + ) + elif "0d Reactor" in reactor_choice: + if reactor_choice == "0d Reactor - Constant Volume": reactor = ct.IdealGasReactor(self.mech.gas) - elif reactor_choice == '0d Reactor - Constant Pressure': + elif reactor_choice == "0d Reactor - Constant Pressure": reactor = ct.IdealGasConstPressureReactor(self.mech.gas) - - SIM, details = self.zero_d_ideal_gas_reactor(self.mech.gas, reactor, details, t_end, **kwargs) - - #print('{:0.1f} us'.format((timer() - start)*1E3)) + + SIM, details = self.zero_d_ideal_gas_reactor( + self.mech.gas, reactor, details, t_end, **kwargs + ) + + # print('{:0.1f} us'.format((timer() - start)*1E3)) return SIM, details - + def checkRxnRates(self, gas): - limit = [1E9, 1E15, 1E21] # reaction limit [first order, second order, third order] + limit = [ + 1e9, + 1e15, + 1e21, + ] # reaction limit [first order, second order, third order] checkRxn = [] for rxnIdx in range(gas.n_reactions): coef_sum = int(sum(gas.reaction(rxnIdx).reactants.values())) if type(gas.reactions()[rxnIdx]) is ct.ThreeBodyReaction: coef_sum += 1 - if coef_sum > 0 and coef_sum-1 <= len(limit): # check 
that the limit is specified - rate = [gas.forward_rate_constants[rxnIdx], gas.reverse_rate_constants[rxnIdx]] - if (np.array(rate) > limit[coef_sum-1]).any(): # if forward or reverse rate exceeds limit - checkRxn.append(rxnIdx+1) - + if coef_sum > 0 and coef_sum - 1 < len( + limit + ): # check that a limit is specified for this order + rate = [ + gas.forward_rate_constants[rxnIdx], + gas.reverse_rate_constants[rxnIdx], + ] + if ( + np.array(rate) > limit[coef_sum - 1] + ).any(): # if forward or reverse rate exceeds limit + checkRxn.append(rxnIdx + 1) + return checkRxn def incident_shock_reactor(self, gas, details, t_end, **kwargs): - if 'u_reac' not in kwargs or 'rho1' not in kwargs: - details['success'] = False - details['message'] = 'velocity and rho1 not specified\n' + if "u_reac" not in kwargs or "rho1" not in kwargs: + details["success"] = False + details["message"] = "velocity and rho1 not specified\n" return None, details - + # set default values - var = {'sim_int_f': 1, 'observable': {'main': 'Density Gradient', 'sub': 0}, - 'A1': 0.2, 'As': 0.2, 'L': 0.1, 't_lab_save': None, - 'ODE_solver': 'BDF', 'rtol': 1E-4, 'atol': 1E-7} + var = { + "sim_int_f": 1, + "observable": {"main": "Density Gradient", "sub": 0}, + "A1": 0.2, + "As": 0.2, + "L": 0.1, + "t_lab_save": None, + "ODE_solver": "BDF", + "rtol": 1e-4, + "atol": 1e-7, + } var.update(kwargs) - - y0 = np.hstack((0.0, var['A1'], gas.density, var['u_reac'], gas.T, 0.0, gas.Y)) # Initial condition - ode = shock_fcns.ReactorOde(gas, t_end, var['rho1'], var['L'], var['As'], var['A1'], False) - with np.errstate(over='raise', divide='raise'): + y0 = np.hstack( + (0.0, var["A1"], gas.density, var["u_reac"], gas.T, 0.0, gas.Y) + ) # Initial condition + ode = shock_fcns.ReactorOde( + gas, t_end, var["rho1"], var["L"], var["As"], var["A1"], False + ) + + with np.errstate(over="raise", divide="raise"): try: - sol = integrate.solve_ivp(ode, [0, t_end], y0, method=var['ODE_solver'], - dense_output=True, rtol=var['rtol'], 
atol=var['atol']) + sol = integrate.solve_ivp( + ode, + [0, t_end], + y0, + method=var["ODE_solver"], + dense_output=True, + rtol=var["rtol"], + atol=var["atol"], + ) sol_success = True sol_message = sol.message sol_t = sol.t @@ -249,185 +329,333 @@ def incident_shock_reactor(self, gas, details, t_end, **kwargs): sol_success = False sol_message = sys.exc_info()[0] sol_t = sol.t - + if sol_success: - self.ODE_success = True # this is passed to SIM to inform saving output function - details['success'] = True + self.ODE_success = ( + True # this is passed to SIM to inform saving output function + ) + details["success"] = True else: - self.ODE_success = False # this is passed to SIM to inform saving output function - details['success'] = False - + self.ODE_success = ( + False # this is passed to SIM to inform saving output function + ) + details["success"] = False + # Generate log output - explanation = '\nCheck for: Fast rates or bad thermo data' + explanation = "\nCheck for: Fast rates or bad thermo data" checkRxns = self.checkRxnRates(gas) if len(checkRxns) > 0: - explanation += '\nSuggested Reactions: ' + ', '.join([str(x) for x in checkRxns]) - details['message'] = '\nODE Error: {:s}\n{:s}\n'.format(sol_message, explanation) - - if var['sim_int_f'] > np.shape(sol_t)[0]: # in case of integration failure - var['sim_int_f'] = np.shape(sol_t)[0] - - if var['sim_int_f'] == 1: + explanation += "\nSuggested Reactions: " + ", ".join( + [str(x) for x in checkRxns] + ) + details["message"] = "\nODE Error: {:s}\n{:s}\n".format( + sol_message, explanation + ) + + if var["sim_int_f"] > np.shape(sol_t)[0]: # in case of integration failure + var["sim_int_f"] = np.shape(sol_t)[0] + + if var["sim_int_f"] == 1: t_sim = sol_t - else: # perform interpolation if integrator sample factor > 1 + else: # perform interpolation if integrator sample factor > 1 j = 0 - t_sim = np.zeros(var['sim_int_f']*(np.shape(sol_t)[0] - 1) + 1) # preallocate array - for i in range(np.shape(sol_t)[0]-1): - 
t_interp = np.interp(np.linspace(i, i+1, var['sim_int_f']+1), [i, i+1], sol_t[i:i+2]) - t_sim[j:j+len(t_interp)] = t_interp + t_sim = np.zeros( + var["sim_int_f"] * (np.shape(sol_t)[0] - 1) + 1 + ) # preallocate array + for i in range(np.shape(sol_t)[0] - 1): + t_interp = np.interp( + np.linspace(i, i + 1, var["sim_int_f"] + 1), + [i, i + 1], + sol_t[i : i + 2], + ) + t_sim[j : j + len(t_interp)] = t_interp j += len(t_interp) - 1 - - ind_var = 't_lab' # INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab - if var['t_lab_save'] is None: # if t_save is not being sent, only plotting variables are needed + + ind_var = "t_lab" # INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab + if ( + var["t_lab_save"] is None + ): # if t_save is not being sent, only plotting variables are needed t_all = t_sim else: - t_all = np.sort(np.unique(np.concatenate((t_sim, var['t_lab_save'])))) # combine t_all and t_save, sort, only unique values - - states = ct.SolutionArray(gas, extra=['t', 't_shock', 'z', 'A', 'vel', 'drhodz_tot', 'drhodz', 'perc_drhodz']) + t_all = np.sort( + np.unique(np.concatenate((t_sim, var["t_lab_save"]))) + ) # combine t_all and t_save, sort, only unique values + + states = ct.SolutionArray( + gas, + extra=[ + "t", + "t_shock", + "z", + "A", + "vel", + "drhodz_tot", + "drhodz", + "perc_drhodz", + ], + ) if self.ODE_success: - for i, t in enumerate(t_all): # calculate from solution - y = sol.sol(t) + for i, t in enumerate(t_all): # calculate from solution + y = sol.sol(t) z, A, rho, v, T, t_shock = y[0:6] Y = y[6:] - states.append(TDY=(T, rho, Y), t=t, t_shock=t_shock, z=z, A=A, vel=v, drhodz_tot=np.nan, drhodz=np.nan, perc_drhodz=np.nan) + states.append( + TDY=(T, rho, Y), + t=t, + t_shock=t_shock, + z=z, + A=A, + vel=v, + drhodz_tot=np.nan, + drhodz=np.nan, + perc_drhodz=np.nan, + ) else: - states.append(TDY=(gas.T, gas.density, gas.Y), t=0.0, t_shock=0.0, z=0.0, A=var['A1'], vel=var['u_reac'], - drhodz_tot=np.nan, drhodz=np.nan, perc_drhodz=np.nan) - - 
reactor_vars = ['t_lab', 't_shock', 'z', 'A', 'vel', 'T', 'P', 'h_tot', 'h', - 's_tot', 's', 'rho', 'drhodz_tot', 'drhodz', 'perc_drhodz', 'perc_abs_drhodz', - 'Y', 'X', 'conc', 'wdot', 'wdotfor', 'wdotrev', - 'HRR_tot', 'HRR', 'delta_h', 'delta_s', - 'eq_con', 'rate_con', 'rate_con_rev', 'net_ROP', 'for_ROP', 'rev_ROP'] - - num = {'reac': np.sum(gas.reactant_stoich_coeffs(), axis=0), - 'prod': np.sum(gas.product_stoich_coeffs(), axis=0), - 'rxns': gas.n_reactions} - + states.append( + TDY=(gas.T, gas.density, gas.Y), + t=0.0, + t_shock=0.0, + z=0.0, + A=var["A1"], + vel=var["u_reac"], + drhodz_tot=np.nan, + drhodz=np.nan, + perc_drhodz=np.nan, + ) + + reactor_vars = [ + "t_lab", + "t_shock", + "z", + "A", + "vel", + "T", + "P", + "h_tot", + "h", + "s_tot", + "s", + "rho", + "drhodz_tot", + "drhodz", + "perc_drhodz", + "perc_abs_drhodz", + "Y", + "X", + "conc", + "wdot", + "wdotfor", + "wdotrev", + "HRR_tot", + "HRR", + "delta_h", + "delta_s", + "eq_con", + "rate_con", + "rate_con_rev", + "net_ROP", + "for_ROP", + "rev_ROP", + ] + + num = { # stoich coeffs are properties in Cantera 3.0, not methods + "reac": np.sum(gas.reactant_stoich_coeffs, axis=0), + "prod": np.sum(gas.product_stoich_coeffs, axis=0), + "rxns": gas.n_reactions, + } + SIM = Simulation_Result(num, states, reactor_vars) - SIM.finalize(self.ODE_success, ind_var, var['observable'], units='CGS') - + SIM.finalize(self.ODE_success, ind_var, var["observable"], units="CGS") + return SIM, details - + def zero_d_ideal_gas_reactor(self, gas, reactor, details, t_end, **kwargs): # set default values - var = {'observable': {'main': 'Concentration', 'sub': 0}, - 't_lab_save': None, 'rtol': 1E-4, 'atol': 1E-7} - + var = { + "observable": {"main": "Concentration", "sub": 0}, + "t_lab_save": None, + "rtol": 1e-4, + "atol": 1e-7, + } + var.update(kwargs) - + # Modify reactor if necessary for frozen composition and isothermal - reactor.energy_enabled = var['solve_energy'] 
+ reactor.chemistry_enabled = not var["frozen_comp"] + # Create Sim sim = ct.ReactorNet([reactor]) - sim.atol = var['atol'] - sim.rtol = var['rtol'] - + sim.atol = var["atol"] + sim.rtol = var["rtol"] + # set up times and observables - ind_var = 't_lab' # INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab - if var['t_lab_save'] is None: + ind_var = "t_lab" # INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab + if var["t_lab_save"] is None: t_all = [t_end] else: - t_all = np.sort(np.unique(np.concatenate(([t_end], var['t_lab_save'])))) # combine t_end and t_save, sort, only unique values - + t_all = np.sort( + np.unique(np.concatenate(([t_end], var["t_lab_save"]))) + ) # combine t_end and t_save, sort, only unique values + self.ODE_success = True - details['success'] = True + details["success"] = True - states = ct.SolutionArray(gas, extra=['t']) - states.append(reactor.thermo.state, t = 0.0) + states = ct.SolutionArray(gas, extra=["t"]) + states.append(reactor.thermo.state, t=0.0) for t in t_all: if not self.ODE_success: break - while sim.time < t: # integrator step until time > target time + while sim.time < t: # integrator step until time > target time try: sim.step() - if sim.time > t: # force interpolation to target time + if sim.time > t: # force interpolation to target time sim.advance(t) states.append(reactor.thermo.state, t=sim.time) except: self.ODE_success = False - details['success'] = False - explanation = '\nCheck for: Fast rates or bad thermo data' + details["success"] = False + explanation = "\nCheck for: Fast rates or bad thermo data" checkRxns = self.checkRxnRates(gas) if len(checkRxns) > 0: - explanation += '\nSuggested Reactions: ' + ', '.join([str(x) for x in checkRxns]) - details['message'] = '\nODE Error: {:s}\n{:s}\n'.format(str(sys.exc_info()[1]), explanation) + explanation += "\nSuggested Reactions: " + ", ".join( + [str(x) for x in checkRxns] + ) + details["message"] = "\nODE Error: {:s}\n{:s}\n".format( + str(sys.exc_info()[1]), 
explanation + ) break - - reactor_vars = ['t_lab', 'T', 'P', 'h_tot', 'h', 's_tot', 's', 'rho', - 'Y', 'X', 'conc', 'wdot', 'wdotfor', 'wdotrev', 'HRR_tot', 'HRR', - 'delta_h', 'delta_s', 'eq_con', 'rate_con', 'rate_con_rev', - 'net_ROP', 'for_ROP', 'rev_ROP'] - - num = {'reac': np.sum(gas.reactant_stoich_coeffs(), axis=0), - 'prod': np.sum(gas.product_stoich_coeffs(), axis=0), - 'rxns': gas.n_reactions} - + + reactor_vars = [ + "t_lab", + "T", + "P", + "h_tot", + "h", + "s_tot", + "s", + "rho", + "Y", + "X", + "conc", + "wdot", + "wdotfor", + "wdotrev", + "HRR_tot", + "HRR", + "delta_h", + "delta_s", + "eq_con", + "rate_con", + "rate_con_rev", + "net_ROP", + "for_ROP", + "rev_ROP", + ] + + num = { # stoich coeffs are properties in Cantera 3.0, not methods + "reac": np.sum(gas.reactant_stoich_coeffs, axis=0), + "prod": np.sum(gas.product_stoich_coeffs, axis=0), + "rxns": gas.n_reactions, + } + SIM = Simulation_Result(num, states, reactor_vars) - SIM.finalize(self.ODE_success, ind_var, var['observable'], units='CGS') + SIM.finalize(self.ODE_success, ind_var, var["observable"], units="CGS") return SIM, details def plug_flow_reactor(self, gas, details, length, area, u_0, **kwargs): # set default values - var = {'observable': {'main': 'Concentration', 'sub': 0}, - 't_lab_save': None, 'rtol': 1E-4, 'atol': 1E-7} - + var = { + "observable": {"main": "Concentration", "sub": 0}, + "t_lab_save": None, + "rtol": 1e-4, + "atol": 1e-7, + } + var.update(kwargs) - + # Modify reactor if necessary for frozen composition and isothermal - reactor.energy_enabled = var['solve_energy'] - reactor.chemistry_enabled = not var['frozen_comp'] - + reactor.energy_enabled = var["solve_energy"] + reactor.chemistry_enabled = not var["frozen_comp"] + # Create Sim sim = ct.ReactorNet([reactor]) - sim.atol = var['atol'] - sim.rtol = var['rtol'] - + sim.atol = var["atol"] + sim.rtol = var["rtol"] + # set up times and observables - ind_var = 't_lab' # INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab + ind_var = "t_lab" # 
INDEPENDENT VARIABLE CURRENTLY HARDCODED FOR t_lab + if var["t_lab_save"] is None: t_all = [t_end] else: - t_all = np.sort(np.unique(np.concatenate(([t_end], var['t_lab_save'])))) # combine t_end and t_save, sort, only unique values - + t_all = np.sort( + np.unique(np.concatenate(([t_end], var["t_lab_save"]))) + ) # combine t_end and t_save, sort, only unique values + self.ODE_success = True - details['success'] = True + details["success"] = True - states = ct.SolutionArray(gas, extra=['t']) - states.append(reactor.thermo.state, t = 0.0) + states = ct.SolutionArray(gas, extra=["t"]) + states.append(reactor.thermo.state, t=0.0) for t in t_all: if not self.ODE_success: break - while sim.time < t: # integrator step until time > target time + while sim.time < t: # integrator step until time > target time try: sim.step() - if sim.time > t: # force interpolation to target time + if sim.time > t: # force interpolation to target time sim.advance(t) states.append(reactor.thermo.state, t=sim.time) except: self.ODE_success = False - details['success'] = False - explanation = '\nCheck for: Fast rates or bad thermo data' + details["success"] = False + explanation = "\nCheck for: Fast rates or bad thermo data" checkRxns = self.checkRxnRates(gas) if len(checkRxns) > 0: - explanation += '\nSuggested Reactions: ' + ', '.join([str(x) for x in checkRxns]) - details['message'] = '\nODE Error: {:s}\n{:s}\n'.format(str(sys.exc_info()[1]), explanation) + explanation += "\nSuggested Reactions: " + ", ".join( + [str(x) for x in checkRxns] + ) + details["message"] = "\nODE Error: {:s}\n{:s}\n".format( + str(sys.exc_info()[1]), explanation + ) break - - reactor_vars = ['t_lab', 'T', 'P', 'h_tot', 'h', 's_tot', 's', 'rho', - 'Y', 'X', 'conc', 'wdot', 'wdotfor', 'wdotrev', 'HRR_tot', 'HRR', - 'delta_h', 'delta_s', 'eq_con', 'rate_con', 'rate_con_rev', - 'net_ROP', 'for_ROP', 'rev_ROP'] - - num = {'reac': np.sum(gas.reactant_stoich_coeffs(), axis=0), - 'prod': 
np.sum(gas.product_stoich_coeffs(), axis=0), - 'rxns': gas.n_reactions} - + + reactor_vars = [ + "t_lab", + "T", + "P", + "h_tot", + "h", + "s_tot", + "s", + "rho", + "Y", + "X", + "conc", + "wdot", + "wdotfor", + "wdotrev", + "HRR_tot", + "HRR", + "delta_h", + "delta_s", + "eq_con", + "rate_con", + "rate_con_rev", + "net_ROP", + "for_ROP", + "rev_ROP", + ] + + num = { # stoich coeffs are properties in Cantera 3.0, not methods + "reac": np.sum(gas.reactant_stoich_coeffs, axis=0), + "prod": np.sum(gas.product_stoich_coeffs, axis=0), + "rxns": gas.n_reactions, + } + SIM = Simulation_Result(num, states, reactor_vars) - SIM.finalize(self.ODE_success, ind_var, var['observable'], units='CGS') + SIM.finalize(self.ODE_success, ind_var, var["observable"], units="CGS") - return SIM, details \ No newline at end of file + return SIM, details diff --git a/src/calculate/shock_fcns.py b/src/calculate/shock_fcns.py index d034c20..71ef6f3 100644 --- a/src/calculate/shock_fcns.py +++ b/src/calculate/shock_fcns.py @@ -177,8 +177,8 @@ def drhodz_per_rxn(states, L=0.1, As=0.2, A1=0.2, area_change=False, rxnNum=None T = states.T cp = states.cp_mass Wmix = states.mean_molecular_weight - nu_fwd = states.product_stoich_coeffs() - nu_rev = states.reactant_stoich_coeffs() + nu_fwd = states.product_stoich_coeffs + nu_rev = states.reactant_stoich_coeffs delta_N = np.sum(nu_fwd, axis=0) - np.sum(nu_rev, axis=0) if rxnNum is None: diff --git a/src/ck2yaml.py b/src/ck2yaml.py deleted file mode 100644 index ec69d84..0000000 --- a/src/ck2yaml.py +++ /dev/null @@ -1,2219 +0,0 @@ -#!/usr/bin/env python -# encoding: utf-8 - -# This file is part of Cantera. See License.txt in the top-level directory or -# at https://cantera.org/license.txt for license and copyright information. 
- -""" -ck2yaml.py: Convert Chemkin-format mechanisms to Cantera YAML input files - -Usage: - ck2yaml [--input=] - [--thermo=] - [--transport=] - [--surface=] - [--name=] - [--extra=] - [--output=] - [--permissive] - [--quiet] - [--no-validate] - [-d | --debug] - -Example: - ck2yaml --input=chem.inp --thermo=therm.dat --transport=tran.dat - -If the output file name is not given, an output file with the same name as the -input file, with the extension changed to '.yaml'. - -An input file containing only species definitions (which can be referenced from -phase definitions in other input files) can be created by specifying only a -thermo file. - -For the case of a surface mechanism, the gas phase input file should be -specified as 'input' and the surface phase input file should be specified as -'surface'. - -The '--permissive' option allows certain recoverable parsing errors (e.g. -duplicate transport data) to be ignored. The '--name=' option -is used to override default phase names (i.e. 'gas'). - -The '--extra=' option takes a YAML file as input. This option can be -used to add to the file description, or to define custom fields that are -included in the YAML output. 
-""" - -from collections import defaultdict, OrderedDict -import logging -import os.path -import sys -import numpy as np -import re -import itertools -import getopt -import textwrap -from email.utils import formatdate - -try: - import ruamel_yaml as yaml -except ImportError: - from ruamel import yaml - -BlockMap = yaml.comments.CommentedMap - -def FlowMap(*args, **kwargs): - m = yaml.comments.CommentedMap(*args, **kwargs) - m.fa.set_flow_style() - return m - -def FlowList(*args, **kwargs): - lst = yaml.comments.CommentedSeq(*args, **kwargs) - lst.fa.set_flow_style() - return lst - -# Improved float formatting requires Numpy >= 1.14 -if hasattr(np, 'format_float_positional'): - def float2string(data): - if data == 0: - return '0.0' - elif 0.01 <= abs(data) < 10000: - return np.format_float_positional(data, trim='0') - else: - return np.format_float_scientific(data, trim='0') -else: - def float2string(data): - return repr(data) - -def represent_float(self, data): - # type: (Any) -> Any - if data != data: - value = '.nan' - elif data == self.inf_value: - value = '.inf' - elif data == -self.inf_value: - value = '-.inf' - else: - value = float2string(data) - - return self.represent_scalar(u'tag:yaml.org,2002:float', value) - -yaml.RoundTripRepresenter.add_representer(float, represent_float) - -QUANTITY_UNITS = {'MOL': 'mol', - 'MOLE': 'mol', - 'MOLES': 'mol', - 'MOLEC': 'molec', - 'MOLECULES': 'molec'} - -ENERGY_UNITS = {'CAL/': 'cal/mol', - 'CAL/MOL': 'cal/mol', - 'CAL/MOLE': 'cal/mol', - 'EVOL': 'eV', - 'EVOLTS': 'eV', - 'JOUL': 'J/mol', - 'JOULES/MOL': 'J/mol', - 'JOULES/MOLE': 'J/mol', - 'KCAL': 'kcal/mol', - 'KCAL/MOL': 'kcal/mol', - 'KCAL/MOLE': 'kcal/mol', - 'KELV': 'K', - 'KELVIN': 'K', - 'KELVINS': 'K', - 'KJOU': 'kJ/mol', - 'KJOULES/MOL': 'kJ/mol', - 'KJOULES/MOLE': 'kJ/mol'} - -def strip_nonascii(s): - return s.encode('ascii', 'ignore').decode() - - -def compatible_quantities(quantity_basis, units): - if quantity_basis == 'mol': - return 'molec' not in units 
- elif quantity_basis == 'molec': - return 'molec' in units or 'mol' not in units - else: - raise ValueError('Unknown quantity basis: "{}"'.format(quantity_basis)) - - -class InputError(Exception): - """ - An exception class for exceptional behavior involving Chemkin-format - mechanism files. Pass a string describing the circumstances that caused - the exceptional behavior. - """ - def __init__(self, message, *args, **kwargs): - if args or kwargs: - super().__init__(message.format(*args, **kwargs)) - else: - super().__init__(message) - - -class Species: - def __init__(self, label, sites=None): - self.label = label - self.thermo = None - self.transport = None - self.sites = sites - self.composition = None - self.note = None - - def __str__(self): - return self.label - - @classmethod - def to_yaml(cls, representer, node): - out = BlockMap([('name', node.label), - ('composition', FlowMap(node.composition.items()))]) - if node.thermo: - out['thermo'] = node.thermo - if node.transport: - out['transport'] = node.transport - if node.sites: - out['sites'] = node.sites - if node.note: - out['note'] = node.note - return representer.represent_dict(out) - - -class Nasa7: - """ - Thermodynamic data parameterized as two seven-coefficient NASA - polynomials. 
- See https://cantera.org/science/science-species.html#the-nasa-7-coefficient-polynomial-parameterization - """ - def __init__(self, *, Tmin, Tmax, Tmid, low_coeffs, high_coeffs, note=''): - self.Tmin = Tmin - self.Tmax = Tmax - self.Tmid = Tmid - self.low_coeffs = low_coeffs - self.high_coeffs = high_coeffs - self.note = note - - @classmethod - def to_yaml(cls, representer, node): - out = BlockMap([('model', 'NASA7')]) - out['temperature-ranges'] = FlowList([node.Tmin, node.Tmid, node.Tmax]) - out['data'] = [FlowList(node.low_coeffs), FlowList(node.high_coeffs)] - if node.note: - note = textwrap.dedent(node.note.rstrip()) - if '\n' in note: - note = yaml.scalarstring.PreservedScalarString(note) - out['note'] = note - return representer.represent_dict(out) - - -class Nasa9: - """ - Thermodynamic data parameterized as any number of nine-coefficient NASA - polynomials. - See https://cantera.org/science/science-species.html#the-nasa-9-coefficient-polynomial-parameterization - - :param data: - List of polynomials, where each polynomial is written as - ``` - [(T_low, T_high), [a_0, a_1, ..., a_8]] - ``` - """ - def __init__(self, *, data, note=''): - self.note = note - self.data = list(sorted(data)) - self.Tranges = [self.data[0][0][0]] - for i in range(1, len(data)): - if abs(self.data[i-1][0][1] - self.data[i][0][0]) > 0.01: - raise ValueError('NASA9 polynomials contain non-adjacent temperature ranges') - self.Tranges.append(self.data[i][0][0]) - self.Tranges.append(self.data[-1][0][1]) - - @classmethod - def to_yaml(cls, representer, node): - out = BlockMap([('model', 'NASA9')]) - out['temperature-ranges'] = FlowList(node.Tranges) - out['data'] = [FlowList(poly) for (trange, poly) in node.data] - if node.note: - out['note'] = node.note - return representer.represent_dict(out) - - -class Reaction: - """ - :param index: - A unique nonnegative integer index - :param reactants: - A list of `(stoichiometry, species name)` tuples - :param products: - A list of 
`(stoichiometry, species name)` tuples - :param kinetics: - A `KineticsModel` instance which describes the rate constant - :param reversible: - Boolean indicating whether the reaction is reversible - :param duplicate: - Boolean indicating whether the reaction is a known (permitted) duplicate - :param forward_orders: - A dictionary specifying a non-default reaction order (value) for each - specified species (key) - :param third_body: - A string name used for the third-body species written in - pressure-dependent reaction types (usually "M") - """ - - def __init__(self, parser, index=-1, reactants=None, products=None, - kinetics=None, reversible=True, duplicate=False, - forward_orders=None, third_body=None): - self.parser = parser - self.index = index - self.reactants = reactants # list of (stoichiometry, species) tuples - self.products = products # list of (stoichiometry, species) tuples - self.kinetics = kinetics - self.reversible = reversible - self.duplicate = duplicate - self.forward_orders = forward_orders or {} - self.third_body = '' - self.comment = '' - - def _coeff_string(self, coeffs): - L = [] - for stoichiometry, species in coeffs: - if stoichiometry != 1: - L.append('{0} {1}'.format(stoichiometry, species)) - else: - L.append(str(species)) - expression = ' + '.join(L) - expression += self.kinetics.reaction_string_suffix(self.third_body) - return expression - - def __str__(self): - """ - Return a string representation of the reaction, e.g. 'A + B <=> C + D'. 
- """ - return '{}{}{}'.format(self._coeff_string(self.reactants), - ' <=> ' if self.reversible else ' => ', - self._coeff_string(self.products)) - - @classmethod - def to_yaml(cls, representer, node): - out = BlockMap([('equation', str(node))]) - out.yaml_add_eol_comment('Reaction {}'.format(node.index), 'equation') - if node.duplicate: - out['duplicate'] = True - node.kinetics.reduce(out) - if node.forward_orders: - out['orders'] = FlowMap(node.forward_orders) - if any((float(x) < 0 for x in node.forward_orders.values())): - out['negative-orders'] = True - node.parser.warn('Negative reaction order for reaction {} ({}).'.format( - node.index, str(node))) - reactant_names = {r[1].label for r in node.reactants} - if any((species not in reactant_names for species in node.forward_orders)): - out['nonreactant-orders'] = True - node.parser.warn('Non-reactant order for reaction {} ({}).'.format( - node.index, str(node))) - if node.comment: - comment = textwrap.dedent(node.comment.rstrip()) - if '\n' in comment: - comment = yaml.scalarstring.PreservedScalarString(comment) - out['note'] = comment - return representer.represent_dict(out) - - -class KineticsModel: - """ - A base class for kinetics models - """ - pressure_dependent = None # overloaded in derived classes - - def __init__(self): - self.efficiencies = {} - - def reaction_string_suffix(self, species): - """ - Suffix for reactant and product strings, used for pressure-dependent - reactions - """ - return '' - - def reduce(self, output): - """ - Assign data from this object to the YAML mapping ``output`` - """ - raise InputError('reduce is not implemented for objects of class {}', - self.__class__.__name__) - - -class Arrhenius: - """ - Represent a modified Arrhenius rate. 
- - :param A: - The pre-exponential factor, given as a tuple consisting of a floating - point value and a units string - :param b: - The temperature exponent - :param Ea: - The activation energy, given as a tuple consisting of a floating - point value and a units string - """ - def __init__(self, A=0.0, b=0.0, Ea=0.0, *, parser): - self.A = A - self.b = b - self.Ea = Ea - self.parser = parser - - def as_yaml(self, extra=()): - out = FlowMap(extra) - if compatible_quantities(self.parser.output_quantity_units, self.A[1]): - out['A'] = self.A[0] - else: - out['A'] = "{0:e} {1}".format(*self.A) - - out['b'] = self.b - - if self.Ea[1] == self.parser.output_energy_units: - out['Ea'] = self.Ea[0] - else: - out['Ea'] = "{0} {1}".format(*self.Ea) - - return out - - -class ElementaryRate(KineticsModel): - """ - A reaction rate described by a single Arrhenius expression. - See https://cantera.org/science/reactions.html#reactions-with-a-pressure-independent-rate - - :param rate: - The Arrhenius expression describing this reaction rate. - """ - pressure_dependent = False - - def __init__(self, rate, **kwargs): - KineticsModel.__init__(self, **kwargs) - self.rate = rate - - def reduce(self, output): - output['rate-constant'] = self.rate.as_yaml() - if self.rate.A[0] < 0: - output['negative-A'] = True - -class SurfaceRate(KineticsModel): - """ - An Arrhenius-like reaction occurring on a surface - See https://cantera.org/science/reactions.html#surface-reactions - - :param rate: - The Arrhenius expression describing this reaction rate. - :param coverages: - A list of tuples where each tuple specifies the coverage dependencies - for a species, in the form `(species_name, a_k, m_k, E_k)` - :param is_sticking: - True if the Arrhenius expression is a parameterization of a sticking - coefficient, rather than the rate constant itself. 
- :param motz_wise: - True if the sticking coefficient should be translated into a rate - coefficient using the correction factor developed by Motz & Wise for - reactions with high (near-unity) sticking coefficients - """ - pressure_dependent = False - - def __init__(self, *, rate, coverages, is_sticking, motz_wise, **kwargs): - KineticsModel.__init__(self, **kwargs) - self.rate = rate - self.coverages = coverages - self.is_sticking = is_sticking - self.motz_wise = motz_wise - - def reduce(self, output): - if self.is_sticking: - output['sticking-coefficient'] = self.rate.as_yaml() - else: - output['rate-constant'] = self.rate.as_yaml() - - if self.motz_wise is not None: - output['Motz-Wise'] = self.motz_wise - - if self.coverages: - covdeps = BlockMap() - for species,A,m,E in self.coverages: - # Energy units for coverage modification match energy units for - # base reaction - if self.rate.Ea[1] != self.rate.parser.output_energy_units: - E = '{} {}'.format(E, self.rate.Ea[1]) - covdeps[species] = FlowList([A, m, E]) - output['coverage-dependencies'] = covdeps - - -class PDepArrhenius(KineticsModel): - """ - A rate calculated by interpolating between Arrhenius expressions at - various pressures. - See https://cantera.org/science/reactions.html#pressure-dependent-arrhenius-rate-expressions-p-log - - :param pressures: - A list of pressures at which Arrhenius expressions are given. 
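For context on the `pressure-dependent-Arrhenius` (PLOG) type being described here: between two tabulated pressures, Cantera interpolates the logarithm of k linearly in the logarithm of P. A minimal sketch of that rule, with names chosen for this example only:

```python
import math

def plog_interp(logk1, logk2, P1, P2, P):
    """Linearly interpolate log k in log P between bracketing pressures P1 < P < P2."""
    w = (math.log(P) - math.log(P1)) / (math.log(P2) - math.log(P1))
    return logk1 + w * (logk2 - logk1)
```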
- :param pressure_units: - A string indicating the units used for the pressures - :param arrhenius: - A list of `Arrhenius` objects at each given pressure - """ - pressure_dependent = True - - def __init__(self, *, pressures, pressure_units, arrhenius, **kwargs): - KineticsModel.__init__(self, **kwargs) - self.pressures = pressures - self.pressure_units = pressure_units - self.arrhenius = arrhenius or [] - - def reduce(self, output): - output['type'] = 'pressure-dependent-Arrhenius' - rates = [] - for pressure, arrhenius in zip(self.pressures, self.arrhenius): - rates.append(arrhenius.as_yaml( - [('P', '{0} {1}'.format(pressure, self.pressure_units))])) - output['rate-constants'] = rates - - -class Chebyshev(KineticsModel): - """ - A rate calculated in terms of a bivariate Chebyshev polynomial. - See https://cantera.org/science/reactions.html#chebyshev-reaction-rate-expressions - - :param coeffs: - Matrix of Chebyshev coefficients, dimension N_T by N_P - :param Tmin: - Minimum temperature for which the parameterization is valid - :param Tmax: - Maximum temperature for which the parameterization is valid - :param Pmin: - Minimum pressure for which the parameterization is valid, given as a - `(value, units)` tuple - :param Pmax: - Maximum pressure for which the parameterization is valid, given as a - `(value, units)` tuple - :param quantity_units: - Quantity units for the rate constant - """ - pressure_dependent = True - - def __init__(self, coeffs, *, Tmin, Tmax, Pmin, Pmax, quantity_units, - **kwargs): - KineticsModel.__init__(self, **kwargs) - self.Tmin = Tmin - self.Tmax = Tmax - self.Pmin = Pmin - self.Pmax = Pmax - self.coeffs = coeffs - self.quantity_units = quantity_units - - def reaction_string_suffix(self, species): - return ' (+{})'.format(species if species else 'M') - - def reduce(self, output): - output['type'] = 'Chebyshev' - output['temperature-range'] = FlowList([self.Tmin, self.Tmax]) - output['pressure-range'] = FlowList(['{0} 
{1}'.format(*self.Pmin), - '{0} {1}'.format(*self.Pmax)]) - if self.quantity_units is not None: - output['units'] = FlowMap([('quantity', self.quantity_units)]) - output['data'] = [FlowList(float(v) for v in row) for row in self.coeffs] - - -class ThreeBody(KineticsModel): - """ - A rate calculated for a reaction which includes a third-body collider. - See https://cantera.org/science/reactions.html#three-body-reactions - - :param high_rate: - The Arrhenius kinetics (high-pressure limit) - :param efficiencies: - A mapping of species names to collider efficiencies - """ - pressure_dependent = True - - def __init__(self, high_rate=None, efficiencies=None, **kwargs): - KineticsModel.__init__(self, **kwargs) - self.high_rate = high_rate - self.efficiencies = efficiencies or {} - - def reaction_string_suffix(self, species): - return ' + M' - - def reduce(self, output): - output['type'] = 'three-body' - output['rate-constant'] = self.high_rate.as_yaml() - if self.high_rate.A[0] < 0: - output['negative-A'] = True - if self.efficiencies: - output['efficiencies'] = FlowMap(self.efficiencies) - - -class Falloff(ThreeBody): - """ - A rate for a pressure-dependent falloff reaction. 
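The falloff entries converted below blend the low- and high-pressure limiting rates. In the simplest case (Lindemann, broadening factor F = 1), the blend follows directly from the reduced pressure; a sketch under that assumption, with all names invented for this example:

```python
def lindemann_k(k_inf, k0, conc_M):
    """Falloff blending with F = 1: k = k_inf * Pr / (1 + Pr), where Pr = k0*[M]/k_inf."""
    Pr = k0 * conc_M / k_inf
    return k_inf * Pr / (1.0 + Pr)

# High [M] recovers the high-pressure limit k_inf; low [M] recovers k0*[M].
```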
- See https://cantera.org/science/reactions.html#falloff-reactions - - :param low_rate: - The Arrhenius kinetics at the low-pressure limit - :param high_rate: - The Arrhenius kinetics at the high-pressure limit - :param efficiencies: - A mapping of species names to collider efficiencies - :param F: - Falloff function parameterization - """ - def __init__(self, low_rate=None, F=None, **kwargs): - ThreeBody.__init__(self, **kwargs) - self.low_rate = low_rate - self.F = F - - def reaction_string_suffix(self, species): - return ' (+{})'.format(species) - - def reduce(self, output): - output['type'] = 'falloff' - output['low-P-rate-constant'] = self.low_rate.as_yaml() - output['high-P-rate-constant'] = self.high_rate.as_yaml() - if self.high_rate.A[0] < 0 and self.low_rate.A[0] < 0: - output['negative-A'] = True - if self.F: - self.F.reduce(output) - if self.efficiencies: - output['efficiencies'] = FlowMap(self.efficiencies) - - -class ChemicallyActivated(ThreeBody): - """ - A rate for a chemically-activated reaction. 
- See https://cantera.org/science/reactions.html#chemically-activated-reactions - - :param low_rate: - The Arrhenius kinetics at the low-pressure limit - :param high_rate: - The Arrhenius kinetics at the high-pressure limit - :param efficiencies: - A mapping of species names to collider efficiencies - :param F: - Falloff function parameterization - """ - def __init__(self, low_rate=None, F=None, **kwargs): - ThreeBody.__init__(self, **kwargs) - self.low_rate = low_rate - self.F = F - - def reaction_string_suffix(self, species): - return ' (+{})'.format(species) - - def reduce(self, output): - output['type'] = 'chemically-activated' - output['low-P-rate-constant'] = self.low_rate.as_yaml() - output['high-P-rate-constant'] = self.high_rate.as_yaml() - if self.high_rate.A[0] < 0 and self.low_rate.A[0] < 0: - output['negative-A'] = True - if self.F: - self.F.reduce(output) - if self.efficiencies: - output['efficiencies'] = FlowMap(self.efficiencies) - - -class Troe: - """ - The Troe falloff function, described with either 3 or 4 parameters. - See https://cantera.org/science/reactions.html#the-troe-falloff-function - """ - def __init__(self, A=0.0, T3=0.0, T1=0.0, T2=None): - self.A = A - self.T3 = T3 - self.T1 = T1 - self.T2 = T2 - - def reduce(self, output): - troe = FlowMap([('A', self.A), ('T3', self.T3), ('T1', self.T1)]) - if self.T2 is not None: - troe['T2'] = self.T2 - output['Troe'] = troe - - -class Sri: - """ - The SRI falloff function, described with either 3 or 5 parameters. 
- See https://cantera.org/science/reactions.html#the-sri-falloff-function - """ - def __init__(self, *, A, B, C, D=None, E=None): - self.A = A - self.B = B - self.C = C - self.D = D - self.E = E - - def reduce(self, output): - sri = FlowMap([('A', self.A), ('B', self.B), ('C', self.C)]) - if self.D: - sri['D'] = self.D - if self.E: - sri['E'] = self.E - - output['SRI'] = sri - - -class TransportData: - geometry_flags = ['atom', 'linear', 'nonlinear'] - - def __init__(self, label, geometry, well_depth, collision_diameter, - dipole_moment, polarizability, z_rot, note=''): - - try: - geometry = int(geometry) - except ValueError: - raise InputError( - "Bad geometry flag '{}' for species '{}', is the flag a float " - "or character? It should be an integer.", geometry, label) - if geometry not in (0, 1, 2): - raise InputError("Bad geometry flag '{}' for species '{}'", - geometry, label) - - self.geometry = self.geometry_flags[int(geometry)] - self.well_depth = float(well_depth) - self.collision_diameter = float(collision_diameter) - self.dipole_moment = float(dipole_moment) - self.polarizability = float(polarizability) - self.z_rot = float(z_rot) - self.note = note.strip() - - @classmethod - def to_yaml(cls, representer, node): - out = BlockMap([('model', 'gas'), - ('geometry', node.geometry), - ('well-depth', node.well_depth), - ('diameter', node.collision_diameter)]) - if node.dipole_moment: - out['dipole'] = node.dipole_moment - if node.polarizability: - out['polarizability'] = node.polarizability - if node.z_rot: - out['rotational-relaxation'] = node.z_rot - if node.note: - out['note'] = node.note - return representer.represent_dict(out) - - -def fortFloat(s): - """ - Convert a string representation of a floating point value to a float, - allowing for some of the peculiarities of allowable Fortran representations. 
- """ - return float(s.strip().lower().replace('d', 'e').replace('e ', 'e+')) - - -def get_index(seq, value): - """ - Find the first location in *seq* which contains a case-insensitive, - whitespace-insensitive match for *value*. Returns *None* if no match is - found. - """ - if isinstance(seq, str): - seq = seq.split() - value = value.lower().strip() - for i, item in enumerate(seq): - if item.lower() == value: - return i - return None - - -def contains(seq, value): - if isinstance(seq, str): - return value.lower() in seq.lower() - else: - return get_index(seq, value) is not None - - -class Surface: - def __init__(self, name, site_density): - self.name = name - self.site_density = site_density - self.species_list = [] - self.reactions = [] - - -class Parser: - def __init__(self): - self.processed_units = False - self.energy_units = 'cal/mol' # for the current REACTIONS section - self.output_energy_units = 'cal/mol' # for the output file - self.quantity_units = 'mol' # for the current REACTIONS section - self.output_quantity_units = 'mol' # for the output file - self.motz_wise = None - self.warning_as_error = True - - self.elements = [] - self.element_weights = {} # for custom elements only - self.species_list = [] # bulk species only - self.species_dict = {} # bulk and surface species - self.surfaces = [] - self.reactions = [] - self.header_lines = [] - self.extra = {} # for extra entries - self.files = [] # input file names - - def warn(self, message): - if self.warning_as_error: - raise InputError(message) - else: - logging.warning(message) - - @staticmethod - def parse_composition(elements, nElements, width): - """ - Parse the elemental composition from a 7 or 9 coefficient NASA polynomial - entry. 
- """ - composition = {} - for i in range(nElements): - symbol = elements[width*i:width*i+2].strip() - count = elements[width*i+2:width*i+width].strip() - if not symbol: - continue - try: - # Convert to float first for cases where ``count`` is a string - # like "2.00". - count = int(float(count)) - if count: - composition[symbol.capitalize()] = count - except ValueError: - pass - return composition - - @staticmethod - def get_rate_constant_units(length_dims, length_units, quantity_dims, - quantity_units, time_dims=1, time_units='s'): - - units = '' - if length_dims: - units += length_units - if length_dims > 1: - units += '^' + str(length_dims) - if quantity_dims: - units += '/' + quantity_units - if quantity_dims > 1: - units += '^' + str(quantity_dims) - if time_dims: - units += '/' + time_units - if time_dims > 1: - units += '^' + str(time_dims) - if units.startswith('/'): - units = '1' + units - return units - - def add_element(self, element_string): - if '/' in element_string: - name, weight, _ = element_string.split('/') - weight = fortFloat(weight) - name = name.capitalize() - self.elements.append(name) - self.element_weights[name] = weight - else: - self.elements.append(element_string.capitalize()) - - def read_NASA7_entry(self, lines, TintDefault, comments): - """ - Read a thermodynamics entry for one species in a Chemkin-format file - (consisting of two 7-coefficient NASA polynomials). Returns the label of - the species, the thermodynamics model as a :class:`Nasa7` object, and - the elemental composition of the species. - - For more details on this format, see `Debugging common errors in CK files - `__. 
- """ - identifier = lines[0][0:24].split() - species = identifier[0].strip() - - if len(identifier) > 1: - note = ''.join(identifier[1:]).strip() - else: - note = '' - - comments = '\n'.join(c.rstrip() for c in comments if c.strip()) - if comments and note: - note = '\n'.join((note, comments)) - elif comments: - note = comments - - # Normal method for specifying the elemental composition - composition = self.parse_composition(lines[0][24:44], 4, 5) - - # Chemkin-style extended elemental composition: additional lines - # indicated by '&' continuation character on preceding lines. Element - # names and abundances are separated by whitespace (not fixed width) - if lines[0].rstrip().endswith('&'): - complines = [] - for i in range(len(lines)-1): - if lines[i].rstrip().endswith('&'): - complines.append(lines[i+1]) - else: - break - lines = [lines[0]] + lines[i+1:] - comp = ' '.join(line.rstrip('&\n') for line in complines).split() - composition = {} - for i in range(0, len(comp), 2): - composition[comp[i].capitalize()] = int(comp[i+1]) - - # Non-standard extended elemental composition data may be located beyond - # column 80 on the first line of the thermo entry - if len(lines[0]) > 80: - elements = lines[0][80:] - composition2 = self.parse_composition(elements, len(elements)//10, 10) - composition.update(composition2) - - if not composition: - raise InputError("Error parsing elemental composition for " - "species '{}'", species) - - # Extract the NASA polynomial coefficients - # Remember that the high-T polynomial comes first! 
- Tmin = fortFloat(lines[0][45:55]) - Tmax = fortFloat(lines[0][55:65]) - try: - Tint = fortFloat(lines[0][65:75]) - except ValueError: - Tint = TintDefault - - high_coeffs = [fortFloat(lines[i][j:k]) - for i,j,k in [(1,0,15), (1,15,30), (1,30,45), (1,45,60), - (1,60,75), (2,0,15), (2,15,30)]] - low_coeffs = [fortFloat(lines[i][j:k]) - for i,j,k in [(2,30,45), (2,45,60), (2,60,75), (3,0,15), - (3,15,30), (3,30,45), (3,45,60)]] - - # Duplicate the valid set of coefficients if only one range is provided - if all(c == 0 for c in low_coeffs) and Tmin == Tint: - low_coeffs = high_coeffs - elif all(c == 0 for c in high_coeffs) and Tmax == Tint: - high_coeffs = low_coeffs - - # Construct and return the thermodynamics model - thermo = Nasa7(Tmin=Tmin, Tmax=Tmax, Tmid=Tint, - low_coeffs=low_coeffs, high_coeffs=high_coeffs, - note=note) - - return species, thermo, composition - - def read_NASA9_entry(self, entry, comments): - """ - Read a thermodynamics ``entry`` for one species given as one or more - 9-coefficient NASA polynomials, written in the format described in - Appendix A of NASA Reference Publication 1311 (McBride and Gordon, 1996). 
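For reference, the seven coefficients extracted here parameterize cp/R directly (a6 and a7 additionally fix H and S). A sketch of the cp/R polynomial, outside the converter; the trailing two coefficients in the example are shown only to fill out the seven-entry list:

```python
def nasa7_cp_over_R(coeffs, T):
    """cp/R = a1 + a2*T + a3*T**2 + a4*T**3 + a5*T**4 (first five NASA-7 coefficients)."""
    a1, a2, a3, a4, a5 = coeffs[:5]
    return a1 + T * (a2 + T * (a3 + T * (a4 + T * a5)))

# A monatomic ideal gas has cp/R = 2.5 at every temperature:
nasa7_cp_over_R([2.5, 0.0, 0.0, 0.0, 0.0, -745.375, 4.366], 1000.0)  # 2.5
```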
- Returns the label of the species, the thermodynamics model as a - :class:`Nasa9` object, and the elemental composition of the species - """ - tokens = entry[0].split() - species = tokens[0] - note = ' '.join(tokens[1:]) - N = int(entry[1][:2]) - note2 = entry[1][3:9].strip() - if note and note2: - note = '{0} [{1}]'.format(note, note2) - elif note2: - note = note2 - - comments = '\n'.join(c.rstrip() for c in comments if c.strip()) - if comments and note: - note = '\n'.join((note, comments)) - elif comments: - note = comments - - composition = self.parse_composition(entry[1][10:50], 5, 8) - - polys = [] - try: - for i in range(N): - A, B, C = entry[2+3*i:2+3*(i+1)] - Trange = [fortFloat(A[1:11]), fortFloat(A[11:21])] - coeffs = [fortFloat(B[0:16]), fortFloat(B[16:32]), - fortFloat(B[32:48]), fortFloat(B[48:64]), - fortFloat(B[64:80]), fortFloat(C[0:16]), - fortFloat(C[16:32]), fortFloat(C[48:64]), - fortFloat(C[64:80])] - polys.append((Trange, coeffs)) - except (IndexError, ValueError) as err: - raise InputError('Error while reading thermo entry for species {}:\n{}', - species, err) - - thermo = Nasa9(data=polys, note=note) - - return species, thermo, composition - - def setup_kinetics(self): - # We look for species including the next permissible character. '\n' is - # appended to the reaction string to identify the last species in the - # reaction string. Checking this character is necessary to correctly - # identify species with names ending in '+' or '='. 
- self.species_tokens = set() - for next_char in ('<', '=', '(', '+', '\n'): - self.species_tokens.update(k + next_char for k in self.species_dict) - self.other_tokens = {'M': 'third-body', 'm': 'third-body', - '(+M)': 'falloff3b', '(+m)': 'falloff3b', - '<=>': 'equal', '=>': 'equal', '=': 'equal', - 'HV': 'photon', 'hv': 'photon'} - self.other_tokens.update(('(+{})'.format(k), 'falloff3b: {}'.format(k)) - for k in self.species_dict) - self.Slen = max(map(len, self.other_tokens)) - - def read_kinetics_entry(self, entry, surface): - """ - Read a kinetics ``entry`` for a single reaction as loaded from a - Chemkin-format file. Returns a :class:`Reaction` object with the - reaction and its associated kinetics. - """ - - # Handle non-default units which apply to this entry - energy_units = self.energy_units - quantity_units = self.quantity_units - if 'units' in entry.lower(): - for units in sorted(QUANTITY_UNITS, key=lambda k: -len(k)): - pattern = re.compile(r'units *\/ *{} *\/'.format(re.escape(units)), - flags=re.IGNORECASE) - m = pattern.search(entry) - if m: - entry = pattern.sub('', entry) - quantity_units = QUANTITY_UNITS[units] - break - - for units in sorted(ENERGY_UNITS, key=lambda k: -len(k)): - pattern = re.compile(r'units *\/ *{} *\/'.format(re.escape(units)), - re.IGNORECASE) - m = pattern.search(entry) - if m: - entry = pattern.sub('', entry) - energy_units = ENERGY_UNITS[units] - break - - lines = entry.strip().splitlines() - - # The first line contains the reaction equation and a set of modified Arrhenius parameters - tokens = lines[0].split() - A = float(tokens[-3]) - b = float(tokens[-2]) - Ea = float(tokens[-1]) - reaction = ''.join(tokens[:-3]) + '\n' - original_reaction = reaction # for use in error messages - - # Identify tokens in the reaction expression in order of - # decreasing length - locs = {} - for i in range(self.Slen, 0, -1): - for j in range(len(reaction)-i+1): - test = reaction[j:j+i] - if test in self.species_tokens: - reaction = 
reaction[:j] + ' '*(i-1) + reaction[j+i-1:] - locs[j] = test[:-1], 'species' - elif test in self.other_tokens: - reaction = reaction[:j] + '\n'*i + reaction[j+i:] - locs[j] = test, self.other_tokens[test] - - # Anything that's left should be a stoichiometric coefficient or a '+' - # between species - for token in reaction.split(): - j = reaction.find(token) - i = len(token) - reaction = reaction[:j] + ' '*i + reaction[j+i:] - if token == '+': - continue - - try: - locs[j] = int(token), 'coeff' - except ValueError: - try: - locs[j] = float(token), 'coeff' - except ValueError: - raise InputError('Unexpected token "{}" in reaction expression "{}".', - token, original_reaction) - - reactants = [] - products = [] - stoichiometry = 1 - lhs = True - for token, kind in [v for k,v in sorted(locs.items())]: - if kind == 'equal': - reversible = token in ('<=>', '=') - lhs = False - elif kind == 'coeff': - stoichiometry = token - elif lhs: - reactants.append((stoichiometry, token, kind)) - stoichiometry = 1 - else: - products.append((stoichiometry, token, kind)) - stoichiometry = 1 - - if lhs: - raise InputError("Failed to find reactant/product delimiter in reaction string.") - - # Create a new Reaction object for this reaction - reaction = Reaction(reactants=[], products=[], reversible=reversible, - parser=self) - - def parse_expression(expression, dest): - third_body_name = None - third_body = False # simple third body reaction (non-falloff) - photon = False - for stoichiometry, species, kind in expression: - if kind == 'third-body': - third_body = True - third_body_name = 'M' - elif kind == 'falloff3b': - third_body_name = 'M' - elif kind.startswith('falloff3b:'): - third_body_name = kind.split()[1] - elif kind == 'photon': - photon = True - else: - dest.append((stoichiometry, self.species_dict[species])) - - return third_body_name, third_body, photon - - third_body_name_r, third_body, photon_r = parse_expression(reactants, reaction.reactants) - third_body_name_p, 
third_body, photon_p = parse_expression(products, reaction.products) - - if third_body_name_r != third_body_name_p: - raise InputError('Third bodies do not match: "{}" and "{}" in' - ' reaction entry:\n\n{}', third_body_name_r, third_body_name_p, entry) - - if photon_r: - raise InputError('Reactant photon not supported. ' - 'Found in reaction:\n{}', entry.strip()) - if photon_p and reversible: - self.warn('Found reversible reaction containing a product photon:' - '\n{0}\nIf the "--permissive" option was specified, this will ' - 'be converted to an irreversible reaction with the photon ' - 'removed.'.format(entry.strip())) - reaction.reversible = False - - reaction.third_body = third_body_name_r - - # Determine the appropriate units for k(T) and k(T,P) based on the number of reactants - # This assumes elementary kinetics for all reactions - rStoich = sum(r[0] for r in reaction.reactants) + (1 if third_body else 0) - if rStoich < 1: - raise InputError('No reactant species for reaction {}.', reaction) - - length_dim = 3 * (rStoich - 1) - quantity_dim = rStoich - 1 - kunits = self.get_rate_constant_units(length_dim, 'cm', - quantity_dim, quantity_units) - klow_units = self.get_rate_constant_units(length_dim + 3, 'cm', - quantity_dim + 1, quantity_units) - - # The rest of the first line contains Arrhenius parameters - arrhenius = Arrhenius( - A=(A, kunits), - b=b, - Ea=(Ea, energy_units), - parser=self - ) - - low_rate = None - high_rate = None - falloff = None - pdep_arrhenius = [] - efficiencies = {} - coverages = [] - cheb_coeffs = [] - revReaction = None - is_sticking = None - motz_wise = None - Tmin = Tmax = Pmin = Pmax = None # Chebyshev parameters - degreeT = degreeP = None - - # Note that the subsequent lines could be in any order - for line in lines[1:]: - if not line.strip(): - continue - tokens = line.split('/') - parsed = False - - if 'stick' in line.lower(): - parsed = True - is_sticking = True - - if 'mwon' in line.lower(): - parsed = True - motz_wise = 
True - - if 'mwoff' in line.lower(): - parsed = True - motz_wise = False - - if 'dup' in line.lower(): - # Duplicate reaction - parsed = True - reaction.duplicate = True - - if 'low' in line.lower(): - # Low-pressure-limit Arrhenius parameters for "falloff" reaction - parsed = True - tokens = tokens[1].split() - low_rate = Arrhenius( - A=(float(tokens[0].strip()), klow_units), - b=float(tokens[1].strip()), - Ea=(float(tokens[2].strip()), energy_units), - parser=self - ) - - elif 'high' in line.lower(): - # High-pressure-limit Arrhenius parameters for "chemically - # activated" reaction - parsed = True - tokens = tokens[1].split() - high_rate = Arrhenius( - A=(float(tokens[0].strip()), kunits), - b=float(tokens[1].strip()), - Ea=(float(tokens[2].strip()), energy_units), - parser=self - ) - # Need to fix units on the base reaction: - arrhenius.A = (arrhenius.A[0], klow_units) - - elif 'rev' in line.lower(): - parsed = True - reaction.reversible = False - tokens = tokens[1].split() - # If the A factor in the rev line is zero, don't create the reverse reaction - if float(tokens[0].strip()) != 0.0: - # Create a reaction proceeding in the opposite direction - revReaction = Reaction(reactants=reaction.products, - products=reaction.reactants, - third_body=reaction.third_body, - reversible=False, - parser=self) - - rev_rate = Arrhenius( - A=(float(tokens[0].strip()), klow_units), - b=float(tokens[1].strip()), - Ea=(float(tokens[2].strip()), energy_units), - parser=self - ) - if third_body: - revReaction.kinetics = ThreeBody(rev_rate) - else: - revReaction.kinetics = ElementaryRate(rev_rate) - - elif 'ford' in line.lower(): - parsed = True - tokens = tokens[1].split() - reaction.forward_orders[tokens[0].strip()] = float(tokens[1]) - - elif 'troe' in line.lower(): - # Troe falloff parameters - parsed = True - tokens = tokens[1].split() - falloff = Troe(A=float(tokens[0].strip()), - T3=float(tokens[1].strip()), - T1=float(tokens[2].strip()), - T2=float(tokens[3].strip()) if 
len(tokens) > 3 else None) - elif 'sri' in line.lower(): - # SRI falloff parameters - parsed = True - tokens = tokens[1].split() - A = float(tokens[0].strip()) - B = float(tokens[1].strip()) - C = float(tokens[2].strip()) - try: - D = float(tokens[3].strip()) - E = float(tokens[4].strip()) - except (IndexError, ValueError): - D = None - E = None - - if D is None or E is None: - falloff = Sri(A=A, B=B, C=C) - else: - falloff = Sri(A=A, B=B, C=C, D=D, E=E) - - elif 'cov' in line.lower(): - parsed = True - C = tokens[1].split() - coverages.append( - [C[0], fortFloat(C[1]), fortFloat(C[2]), fortFloat(C[3])]) - - elif 'cheb' in line.lower(): - # Chebyshev parameters - parsed = True - tokens = [t.strip() for t in tokens] - if contains(tokens, 'TCHEB'): - index = get_index(tokens, 'TCHEB') - tokens2 = tokens[index+1].split() - Tmin = float(tokens2[0].strip()) - Tmax = float(tokens2[1].strip()) - if contains(tokens, 'PCHEB'): - index = get_index(tokens, 'PCHEB') - tokens2 = tokens[index+1].split() - Pmin = (float(tokens2[0].strip()), 'atm') - Pmax = (float(tokens2[1].strip()), 'atm') - if contains(tokens, 'TCHEB') or contains(tokens, 'PCHEB'): - pass - elif degreeT is None or degreeP is None: - tokens2 = tokens[1].split() - degreeT = int(float(tokens2[0].strip())) - degreeP = int(float(tokens2[1].strip())) - cheb_coeffs.extend([float(t.strip()) for t in tokens2[2:]]) - else: - tokens2 = tokens[1].split() - cheb_coeffs.extend([float(t.strip()) for t in tokens2]) - - elif 'plog' in line.lower(): - # Pressure-dependent Arrhenius parameters - parsed = True - tokens = tokens[1].split() - pdep_arrhenius.append([float(tokens[0].strip()), Arrhenius( - A=(float(tokens[1].strip()), kunits), - b=float(tokens[2].strip()), - Ea=(float(tokens[3].strip()), energy_units), - parser=self - )]) - elif len(tokens) >= 2: - # Assume a list of collider efficiencies - parsed = True - for collider, efficiency in zip(tokens[0::2], tokens[1::2]): - efficiencies[collider.strip()] = 
float(efficiency.strip()) - - if not parsed: - raise InputError('Unparsable line:\n"""\n{}\n"""', line) - - # Decide which kinetics to keep and store them on the reaction object. - # At most one of the special cases should be true - tests = [cheb_coeffs, pdep_arrhenius, low_rate, high_rate, third_body, - surface] - if sum(bool(t) for t in tests) > 1: - raise InputError('Reaction {} contains parameters for more than ' - 'one reaction type.', original_reaction) - - if cheb_coeffs: - if Tmin is None or Tmax is None: - raise InputError('Missing TCHEB line for reaction {}', reaction) - if Pmin is None or Pmax is None: - raise InputError('Missing PCHEB line for reaction {}', reaction) - if len(cheb_coeffs) != degreeT * degreeP: - raise InputError('Incorrect number of Chebyshev coefficients. ' - 'Expected {}*{} = {} but got {}', degreeT, degreeP, - degreeT * degreeP, len(cheb_coeffs)) - if quantity_units == self.quantity_units: - quantity_units = None - reaction.kinetics = Chebyshev( - Tmin=Tmin, Tmax=Tmax, Pmin=Pmin, Pmax=Pmax, - quantity_units=quantity_units, - coeffs=np.array(cheb_coeffs, np.float64).reshape((degreeT, degreeP))) - elif pdep_arrhenius: - reaction.kinetics = PDepArrhenius( - pressures=[P for P, arrh in pdep_arrhenius], - pressure_units="atm", - arrhenius=[arrh for P, arrh in pdep_arrhenius] - ) - elif low_rate is not None: - reaction.kinetics = Falloff(high_rate=arrhenius, - low_rate=low_rate, - F=falloff, - efficiencies=efficiencies) - elif high_rate is not None: - reaction.kinetics = ChemicallyActivated(high_rate=high_rate, - low_rate=arrhenius, - F=falloff, - efficiencies=efficiencies) - elif third_body: - reaction.kinetics = ThreeBody(high_rate=arrhenius, - efficiencies=efficiencies) - elif reaction.third_body: - raise InputError('Reaction equation implies pressure ' - 'dependence but no alternate rate parameters (i.e. 
HIGH or ' - 'LOW) were given for reaction {}', reaction) - elif surface: - reaction.kinetics = SurfaceRate(rate=arrhenius, - coverages=coverages, - is_sticking=is_sticking, - motz_wise=motz_wise) - else: - reaction.kinetics = ElementaryRate(arrhenius) - - if revReaction: - revReaction.duplicate = reaction.duplicate - revReaction.kinetics.efficiencies = reaction.kinetics.efficiencies - - return reaction, revReaction - - def load_extra_file(self, path): - """ - Load YAML-formatted entries from ``path`` on disk. - """ - with open(path, 'rt', encoding="utf-8") as stream: - yml = yaml.round_trip_load(stream) - - # do not overwrite reserved field names - reserved = {'generator', 'input-files', 'cantera-version', 'date', - 'units', 'phases', 'species', 'reactions'} - reserved &= set(yml.keys()) - if reserved: - raise InputError("The YAML file '{}' provided as '--extra' input " - "must not redefine reserved field name: " - "'{}'".format(path, reserved)) - - # replace header lines - if 'description' in yml: - if isinstance(yml['description'], str): - if self.header_lines: - self.header_lines += [''] - self.header_lines += yml.pop('description').split('\n') - else: - raise InputError("The alternate description provided in " - "'{}' needs to be a string".format(path)) - - # remainder - self.extra = yml - - def load_chemkin_file(self, path, skip_undeclared_species=True, surface=False): - """ - Load a Chemkin-format input file from ``path`` on disk. - """ - transportLines = [] - self.line_number = 0 - - with open(path, 'r', errors='ignore') as ck_file: - - def readline(): - self.line_number += 1 - line = strip_nonascii(ck_file.readline()) - if '!' in line: - return line.split('!', 1) - elif line: - return line, '' - else: - return None, None - - # @TODO: This loop is a bit of a mess, and could probably be cleaned - # up by refactoring it into a set of methods for processing each - # input file section. 
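The `@TODO` comment above proposes refactoring this loop into a set of per-section processing methods. A hypothetical sketch of that dispatch-table refactor (handler names are invented for illustration and do not exist in the codebase):

```python
# Map each Chemkin section keyword prefix to a handler method name.
# Chemkin recognizes sections by their first four characters, which is
# why both 'SPEC' and 'SPECIES' resolve to the same handler.
SECTION_HANDLERS = {
    'ELEM': 'read_elements_section',
    'SPEC': 'read_species_section',
    'SITE': 'read_site_section',
    'THER': 'read_thermo_section',
    'REAC': 'read_reactions_section',
    'TRAN': 'read_transport_section',
}

def dispatch_section(parser, first_token):
    """Return the handler name for a section keyword, or None if the
    token does not start a recognized section."""
    keyword = first_token.upper()[:4]
    return SECTION_HANDLERS.get(keyword)
```

Each handler would own its section's `readline()` loop and implicit-END warning, shrinking the main loop to keyword dispatch.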
- line, comment = readline() - advance = True - inHeader = True - header = [] - indent = 80 - while line is not None: - tokens = line.split() or [''] - if inHeader and not line.strip(): - header.append(comment.rstrip()) - if comment.strip() != '': # skip indent calculation if empty - indent = min(indent, re.search('[^ ]', comment).start()) - - if tokens[0].upper().startswith('ELEM'): - inHeader = False - tokens = tokens[1:] - while line is not None and get_index(line, 'END') is None: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('SPEC', 'SPECIES'): - self.warn('"ELEMENTS" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - tokens.pop() - break - - line, comment = readline() - # Normalize custom atomic weights - line = re.sub(r'\s*/\s*([0-9\.EeDd+-]+)\s*/', r'/\1/ ', line) - tokens.extend(line.split()) - - for token in tokens: - if token.upper() == 'END': - break - self.add_element(token) - - elif tokens[0].upper().startswith('SPEC'): - # List of species identifiers - species = tokens[1:] - inHeader = False - comments = {} - while line is not None and get_index(line, 'END') is None: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('REAC', 'REACTIONS', 'TRAN', - 'TRANSPORT', 'THER', 'THERMO'): - self.warn('"SPECIES" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - species.pop() - # Fix the case where there THERMO ALL or REAC UNITS - # ends the species section - if (species[-1].upper().startswith('THER') or - species[-1].upper().startswith('REAC')): - species.pop() - break - - line, comment = readline() - comment = comment.strip() - line_species = line.split() - if len(line_species) == 1 and comment: - comments[line_species[0]] = comment - species.extend(line_species) - - for token in species: - if 
token.upper() == 'END': - break - if token in self.species_dict: - species = self.species_dict[token] - self.warn('Found additional declaration of species {}'.format(species)) - else: - species = Species(label=token) - if token in comments: - species.note = comments[token] - self.species_dict[token] = species - self.species_list.append(species) - - elif tokens[0].upper().startswith('SITE'): - # List of species identifiers for surface species - if '/' in tokens[0]: - surf_name = tokens[0].split('/')[1] - else: - surf_name = 'surface{}'.format(len(self.surfaces)+1) - tokens = tokens[1:] - site_density = None - for token in tokens[:]: - if token.upper().startswith('SDEN/'): - site_density = fortFloat(token.split('/')[1]) - tokens.remove(token) - - if site_density is None: - raise InputError('SITE section defined with no site density') - self.surfaces.append(Surface(name=surf_name, - site_density=site_density)) - surf = self.surfaces[-1] - - inHeader = False - while line is not None and get_index(line, 'END') is None: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('REAC', 'REACTIONS', 'THER', - 'THERMO'): - self.warn('"SITE" section implicitly ended by start of ' - 'next section on line {}.'.format(self.line_number)) - advance = False - tokens.pop() - # Fix the case where there THERMO ALL or REAC UNITS - # ends the species section - if (tokens[-1].upper().startswith('THER') or - tokens[-1].upper().startswith('REAC')): - tokens.pop() - break - - line, comment = readline() - tokens.extend(line.split()) - - for token in tokens: - if token.upper() == 'END': - break - if token.count('/') == 2: - # species occupies a specific number of sites - token, sites, _ = token.split('/') - sites = float(sites) - else: - sites = None - if token in self.species_dict: - species = self.species_dict[token] - self.warn('Found additional declaration of species {0}'.format(species)) - else: - species = Species(label=token, 
sites=sites) - self.species_dict[token] = species - surf.species_list.append(species) - - elif tokens[0].upper().startswith('THER') and contains(line, 'NASA9'): - inHeader = False - entryLength = None - entry = [] - # Gather comments on lines preceding and within this entry - comments = [] - while line is not None and get_index(line, 'END') != 0: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('REAC', 'REACTIONS', 'TRAN', 'TRANSPORT'): - self.warn('"THERMO" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - tokens.pop() - break - - line, comment = readline() - comments.append(comment) - if not line: - continue - - if entryLength is None: - entryLength = 0 - # special case if (redundant) temperature ranges are - # given as the first line - try: - s = line.split() - float(s[0]), float(s[1]), float(s[2]) - continue - except (IndexError, ValueError): - pass - - entry.append(line) - if len(entry) == 2: - entryLength = 2 + 3 * int(line.split()[0]) - - if len(entry) == entryLength: - label, thermo, comp = self.read_NASA9_entry(entry, comments) - comments = [] - entry = [] - if label not in self.species_dict: - if skip_undeclared_species: - logging.info('Skipping unexpected species "{0}" while reading thermodynamics entry.'.format(label)) - continue - else: - # Add a new species entry - species = Species(label=label) - self.species_dict[label] = species - self.species_list.append(species) - else: - species = self.species_dict[label] - - # use the first set of thermo data found - if species.thermo is not None: - self.warn('Found additional thermo entry for species {0}. ' - 'If --permissive was given, the first entry is used.'.format(label)) - else: - species.thermo = thermo - species.composition = comp - - elif tokens[0].upper().startswith('THER'): - # List of thermodynamics (hopefully one per species!) 
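The NASA-7 parsing that follows keys off the classic fixed-column card format: column 80 (index 79) of each thermo line carries a card number 1–4, and card 4 closes a species entry. A standalone sketch of that column check, with a synthetic sample line:

```python
def nasa7_card_number(line):
    """Return the NASA-7 card number (1-4) read from column 80 of a
    Chemkin THERMO line, or None when the line is not a numbered card.

    Chemkin's fixed format places a digit 1-4 at index 79 to mark the
    four cards that make up one species entry.
    """
    if len(line) >= 80 and line[79] in ('1', '2', '3', '4'):
        return int(line[79])
    return None

# Synthetic card: 79 characters of payload padding, then the card digit.
sample = 'H2'.ljust(79) + '1'
```

This is why the loop below tests `len(line) >= 80 and line[79] in ['1', '2', '3', '4']` before accumulating thermo cards.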
- inHeader = False - line, comment = readline() - if line is not None and get_index(line, 'END') is None: - TintDefault = float(line.split()[1]) - thermo = [] - current = [] - # Gather comments on lines preceding and within this entry - comments = [comment] - while line is not None and get_index(line, 'END') != 0: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('REAC', 'REACTIONS', 'TRAN', 'TRANSPORT'): - self.warn('"THERMO" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - tokens.pop() - break - - if comment: - current.append('!'.join((line, comment))) - else: - current.append(line) - if len(line) >= 80 and line[79] in ['1', '2', '3', '4']: - thermo.append(line) - if line[79] == '4': - try: - label, thermo, comp = self.read_NASA7_entry(thermo, TintDefault, comments) - except Exception as e: - error_line_number = self.line_number - len(current) + 1 - error_entry = ''.join(current).rstrip() - logging.info( - 'Error while reading thermo entry starting on line {0}:\n' - '"""\n{1}\n"""'.format(error_line_number, error_entry) - ) - raise - - if label not in self.species_dict: - if skip_undeclared_species: - logging.info('Skipping unexpected species "{0}" while reading thermodynamics entry.'.format(label)) - thermo = [] - line, comment = readline() - current = [] - comments = [comment] - continue - else: - # Add a new species entry - species = Species(label=label) - self.species_dict[label] = species - self.species_list.append(species) - else: - species = self.species_dict[label] - - # use the first set of thermo data found - if species.thermo is not None: - self.warn('Found additional thermo entry for species {0}. 
' - 'If --permissive was given, the first entry is used.'.format(label)) - else: - species.thermo = thermo - species.composition = comp - - thermo = [] - current = [] - comments = [] - elif thermo and thermo[-1].rstrip().endswith('&'): - # Include Chemkin-style extended elemental composition - thermo.append(line) - line, comment = readline() - comments.append(comment) - - elif tokens[0].upper().startswith('REAC'): - # Reactions section - inHeader = False - for token in tokens[1:]: - token = token.upper() - if token in ENERGY_UNITS: - self.energy_units = ENERGY_UNITS[token] - if not self.processed_units: - self.output_energy_units = ENERGY_UNITS[token] - elif token in QUANTITY_UNITS: - self.quantity_units = QUANTITY_UNITS[token] - if not self.processed_units: - self.output_quantity_units = QUANTITY_UNITS[token] - elif token == 'MWON': - self.motz_wise = True - elif token == 'MWOFF': - self.motz_wise = False - else: - raise InputError("Unrecognized token on REACTIONS line, {0!r}", token) - - self.processed_units = True - - kineticsList = [] - commentsList = [] - startLines = [] - kinetics = '' - comments = '' - - line, comment = readline() - if surface: - reactions = self.surfaces[-1].reactions - else: - reactions = self.reactions - while line is not None and get_index(line, 'END') is None: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('TRAN', 'TRANSPORT'): - self.warn('"REACTIONS" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - break - - lineStartsWithComment = not line and comment - line = line.rstrip() - comment = comment.rstrip() - - if '=' in line and not lineStartsWithComment: - # Finish previous record - if comment: - # End of line comment belongs with this reaction - comments += comment + '\n' - comment = '' - kineticsList.append(kinetics) - commentsList.append(comments) - startLines.append(self.line_number) - kinetics = '' 
- comments = '' - - if line.strip(): - kinetics += line + '\n' - if comment: - comments += comment + '\n' - - line, comment = readline() - - # Don't forget the last reaction! - if kinetics.strip() != '': - kineticsList.append(kinetics) - commentsList.append(comments) - - # We don't actually know whether comments belong to the - # previous or next reaction, but to keep them positioned - # correctly, we associate them with the next reaction. A - # comment after the last reaction is associated with that - # reaction - if kineticsList and kineticsList[0] == '': - kineticsList.pop(0) - final_comment = commentsList.pop() - if final_comment and commentsList[-1]: - commentsList[-1] = commentsList[-1].rstrip() + '\n' + final_comment - elif final_comment: - commentsList[-1] = final_comment - - self.setup_kinetics() - for kinetics, comment, line_number in zip(kineticsList, commentsList, startLines): - try: - reaction, revReaction = self.read_kinetics_entry(kinetics, surface) - except Exception as e: - self.line_number = line_number - logging.info('Error reading reaction starting on ' - 'line {0}:\n"""\n{1}\n"""'.format( - line_number, kinetics.rstrip())) - raise - reaction.line_number = line_number - reaction.comment = comment - reactions.append(reaction) - if revReaction is not None: - revReaction.line_number = line_number - reactions.append(revReaction) - - elif tokens[0].upper().startswith('TRAN'): - inHeader = False - line, comment = readline() - transport_start_line = self.line_number - while line is not None and get_index(line, 'END') is None: - # Grudging support for implicit end of section - start = line.strip().upper().split() - if start and start[0] in ('REAC', 'REACTIONS'): - self.warn('"TRANSPORT" section implicitly ended by start of ' - 'next section on line {0}.'.format(self.line_number)) - advance = False - tokens.pop() - break - - if comment: - transportLines.append('!'.join((line, comment))) - else: - transportLines.append(line) - line, comment = readline() - 
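The REACTIONS handling above attaches comment lines to the reaction that follows them and folds a comment appearing after the last reaction into that final reaction. That convention can be sketched in isolation (the input shape here is illustrative, not the parser's actual data structure):

```python
def associate_comments(records):
    """Attach comments to the following reaction.

    records: list of ('comment', text) or ('reaction', text) tuples, in
    file order. Returns a list of (reaction, comment) pairs; trailing
    comments after the last reaction are folded into that reaction.
    """
    pending, out = [], []
    for kind, text in records:
        if kind == 'comment':
            pending.append(text)
        else:
            out.append((text, '\n'.join(pending)))
            pending = []
    if pending and out:  # comments after the final reaction
        rxn, cmt = out[-1]
        out[-1] = (rxn, (cmt + '\n' if cmt else '') + '\n'.join(pending))
    return out
```

As the original comment notes, the file format itself never says which reaction a comment belongs to; attaching to the *next* reaction simply keeps comments positioned correctly when the mechanism is re-emitted.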
- elif line.strip(): - raise InputError('Section starts with unrecognized keyword' - '\n"""\n{}\n"""', line.rstrip()) - - if advance: - line, comment = readline() - else: - advance = True - - for h in header: - self.header_lines.append(h[indent:]) - - self.check_duplicate_reactions() - - for index, reaction in enumerate(self.reactions): - reaction.index = index + 1 - - if transportLines: - self.parse_transport_data(transportLines, path, transport_start_line) - - def check_duplicate_reactions(self): - """ - Check for marked (and unmarked!) duplicate reactions. Raise exception - for unmarked duplicate reactions. - - Pressure-independent and pressure-dependent reactions are treated as - different, so they don't need to be marked as duplicate. - """ - possible_duplicates = defaultdict(list) - for r in self.reactions: - # sort reactants by name, so disordered duplicate will be caught - reactants = r.reactants - reactant_names = [s[1].label for s in reactants] - reactants = [s for _, s in sorted(zip(reactant_names, reactants))] - - # sort products by name, so disordered duplicate will be caught - products = r.products - product_names = [s[1].label for s in products] - products = [s for _, s in sorted(zip(product_names, products))] - - k = (tuple(reactants), tuple(products), r.kinetics.pressure_dependent) - k_rev = (tuple(products), tuple(reactants), r.kinetics.pressure_dependent) - - # check for undeclared duplicate written in opposite direction - if (k_rev in possible_duplicates and - (r.reversible or - any([rxn.reversible for rxn in possible_duplicates[k_rev]]))): - - possible_duplicates[k_rev].append(r) - else: - possible_duplicates[k].append(r) - - for reactions in possible_duplicates.values(): - for r1,r2 in itertools.combinations(reactions, 2): - if r1.duplicate and r2.duplicate: - pass # marked duplicate reaction - elif (type(r1.kinetics) == ThreeBody and - type(r2.kinetics) != ThreeBody): - pass - elif (type(r1.kinetics) != ThreeBody and - type(r2.kinetics) == 
ThreeBody): - pass - elif (hasattr(r1.third_body, 'upper') and - r1.third_body.upper() == 'M' and - r1.kinetics.efficiencies.get(r2.third_body) == 0): - pass # explicit zero efficiency - elif (hasattr(r2.third_body, 'upper') and - r2.third_body.upper() == 'M' and - r2.kinetics.efficiencies.get(r1.third_body) == 0): - pass # explicit zero efficiency - elif r1.third_body != r2.third_body: - pass # distinct third bodies - else: - raise InputError( - 'Encountered unmarked duplicate reaction {} ' - '(See lines {} and {} of the input file.).', - r1, r1.line_number, r2.line_number) - - def parse_transport_data(self, lines, filename, line_offset): - """ - Parse the Chemkin-format transport data in ``lines`` (a list of strings) - and add that transport data to the previously-loaded species. - """ - - for i,line in enumerate(lines): - original_line = line - line = line.strip() - if not line or line.startswith('!'): - continue - if get_index(line, 'END') == 0: - break - - if '!' in line: - line, comment = line.split('!', 1) - else: - comment = '' - - data = line.split() - - speciesName = data[0] - if speciesName in self.species_dict: - if len(data) != 7: - raise InputError('Unable to parse line {} of {}:\n"""\n{}"""\n' - '6 transport parameters expected, but found {}.', - line_offset + i, filename, original_line, len(data)-1) - - if self.species_dict[speciesName].transport is None: - self.species_dict[speciesName].transport = TransportData(*data, note=comment) - else: - self.warn('Ignoring duplicate transport data' - ' for species "{}" on line {} of "{}".'.format( - speciesName, line_offset + i, filename)) - - - def write_yaml(self, name='gas', out_name='mech.yaml'): - emitter = yaml.YAML() - emitter.width = 70 - - emitter.register_class(Species) - emitter.register_class(Nasa7) - emitter.register_class(Nasa9) - emitter.register_class(TransportData) - emitter.register_class(Reaction) - - with open(out_name, 'w') as dest: - have_transport = True - for s in self.species_list: - 
if not s.transport: - have_transport = False - - surface_names = [] - n_reacting_phases = 0 - if self.reactions: - n_reacting_phases += 1 - for surf in self.surfaces: - surface_names.append(surf.name) - if surf.reactions: - n_reacting_phases += 1 - - # Write header lines - desc = '\n'.join(line.rstrip() for line in self.header_lines) - desc = desc.strip('\n') - desc = textwrap.dedent(desc) - if desc.strip(): - emitter.dump({'description': yaml.scalarstring.PreservedScalarString(desc)}, dest) - - # Additional information regarding conversion - files = [os.path.basename(f) for f in self.files] - metadata = BlockMap([ - ('generator', 'ck2yaml'), - ('input-files', FlowList(files)), - ('cantera-version', '2.5.0a4'), - ('date', formatdate(localtime=True)), - ]) - if desc.strip(): - metadata.yaml_set_comment_before_after_key('generator', before='\n') - emitter.dump(metadata, dest) - - # Write extra entries - if self.extra: - extra = BlockMap(self.extra) - key = list(self.extra.keys())[0] - extra.yaml_set_comment_before_after_key(key, before='\n') - emitter.dump(extra, dest) - - units = FlowMap([('length', 'cm'), ('time', 's')]) - units['quantity'] = self.output_quantity_units - units['activation-energy'] = self.output_energy_units - units_map = BlockMap([('units', units)]) - units_map.yaml_set_comment_before_after_key('units', before='\n') - emitter.dump(units_map, dest) - - phases = [] - reactions = [] - if name is not None: - phase = BlockMap() - phase['name'] = name - phase['thermo'] = 'ideal-gas' - phase['elements'] = FlowList(self.elements) - phase['species'] = FlowList(S.label for S in self.species_list) - if self.reactions: - phase['kinetics'] = 'gas' - if n_reacting_phases == 1: - reactions.append(('reactions', self.reactions)) - else: - rname = '{}-reactions'.format(name) - phase['reactions'] = [rname] - reactions.append((rname, self.reactions)) - if have_transport: - phase['transport'] = 'mixture-averaged' - phase['state'] = FlowMap([('T', 300.0), ('P', '1 
atm')]) - phases.append(phase) - - for surf in self.surfaces: - # Write definitions for surface phases - phase = BlockMap() - phase['name'] = surf.name - phase['thermo'] = 'ideal-surface' - phase['elements'] = FlowList(self.elements) - phase['species'] = FlowList(S.label for S in surf.species_list) - phase['site-density'] = surf.site_density - if self.motz_wise is not None: - phase['Motz-Wise'] = self.motz_wise - if surf.reactions: - phase['kinetics'] = 'surface' - if n_reacting_phases == 1: - reactions.append(('reactions', surf.reactions)) - else: - rname = '{}-reactions'.format(surf.name) - phase['reactions'] = [rname] - reactions.append((rname, surf.reactions)) - phase['state'] = FlowMap([('T', 300.0), ('P', '1 atm')]) - phases.append(phase) - - if phases: - phases_map = BlockMap([('phases', phases)]) - phases_map.yaml_set_comment_before_after_key('phases', before='\n') - emitter.dump(phases_map, dest) - - # Write data on custom elements - if self.element_weights: - elements = [] - for name, weight in sorted(self.element_weights.items()): - E = BlockMap([('symbol', name), ('atomic-weight', weight)]) - elements.append(E) - elementsMap = BlockMap([('elements', elements)]) - elementsMap.yaml_set_comment_before_after_key('elements', before='\n') - emitter.dump(elementsMap, dest) - - # Write the individual species data - all_species = list(self.species_list) - for species in all_species: - if species.composition is None: - raise InputError('No thermo data found for ' - 'species {!r}'.format(species.label)) - - for surf in self.surfaces: - all_species.extend(surf.species_list) - speciesMap = BlockMap([('species', all_species)]) - speciesMap.yaml_set_comment_before_after_key('species', before='\n') - emitter.dump(speciesMap, dest) - - # Write the reactions section(s) - for label, R in reactions: - reactionsMap = BlockMap([(label, R)]) - reactionsMap.yaml_set_comment_before_after_key(label, before='\n') - emitter.dump(reactionsMap, dest) - - # Names of surface phases 
need to be returned so they can be imported as - # part of mechanism validation - return surface_names - - @staticmethod - def convert_mech(input_file, thermo_file=None, transport_file=None, - surface_file=None, phase_name='gas', extra_file=None, - out_name=None, quiet=False, permissive=None): - - parser = Parser() - if quiet: - logging.basicConfig(level=logging.ERROR) - else: - logging.basicConfig(level=logging.INFO) - - if permissive is not None: - parser.warning_as_error = not permissive - - if input_file: - parser.files.append(input_file) - input_file = os.path.expanduser(input_file) - if not os.path.exists(input_file): - raise IOError('Missing input file: {0!r}'.format(input_file)) - try: - # Read input mechanism files - parser.load_chemkin_file(input_file) - except Exception as err: - logging.warning("\nERROR: Unable to parse '{0}' near line {1}:\n{2}\n".format( - input_file, parser.line_number, err)) - raise - else: - phase_name = None - - if thermo_file: - parser.files.append(thermo_file) - thermo_file = os.path.expanduser(thermo_file) - if not os.path.exists(thermo_file): - raise IOError('Missing thermo file: {0!r}'.format(thermo_file)) - try: - parser.load_chemkin_file(thermo_file, - skip_undeclared_species=bool(input_file)) - except Exception: - logging.warning("\nERROR: Unable to parse '{0}' near line {1}:\n".format( - thermo_file, parser.line_number)) - raise - - if transport_file: - parser.files.append(transport_file) - transport_file = os.path.expanduser(transport_file) - if not os.path.exists(transport_file): - raise IOError('Missing transport file: {0!r}'.format(transport_file)) - with open(transport_file, 'r', errors='ignore') as f: - lines = [strip_nonascii(line) for line in f] - parser.parse_transport_data(lines, transport_file, 1) - - # Transport validation: make sure all species have transport data - for s in parser.species_list: - if s.transport is None: - raise InputError("No transport data for species '{}'.", s) - - if surface_file: - 
parser.files.append(surface_file) - surface_file = os.path.expanduser(surface_file) - if not os.path.exists(surface_file): - raise IOError('Missing input file: {0!r}'.format(surface_file)) - try: - # Read input mechanism files - parser.load_chemkin_file(surface_file, surface=True) - except Exception as err: - logging.warning("\nERROR: Unable to parse '{0}' near line {1}:\n{2}\n".format( - surface_file, parser.line_number, err)) - raise - - if extra_file: - parser.files.append(extra_file) - extra_file = os.path.expanduser(extra_file) - if not os.path.exists(extra_file): - raise IOError('Missing input file: {0!r}'.format(extra_file)) - try: - # Read input mechanism files - parser.load_extra_file(extra_file) - except Exception as err: - logging.warning("\nERROR: Unable to parse '{0}':\n{1}\n".format( - extra_file, err)) - raise - - if out_name: - out_name = os.path.expanduser(out_name) - else: - out_name = os.path.splitext(input_file)[0] + '.yaml' - - # Write output file - surface_names = parser.write_yaml(name=phase_name, out_name=out_name) - if not quiet: - nReactions = len(parser.reactions) + sum(len(surf.reactions) for surf in parser.surfaces) - print('Wrote YAML mechanism file to {0!r}.'.format(out_name)) - print('Mechanism contains {0} species and {1} reactions.'.format(len(parser.species_list), nReactions)) - return surface_names - - -def convert_mech(input_file, thermo_file=None, transport_file=None, - surface_file=None, phase_name='gas', extra_file=None, - out_name=None, quiet=False, permissive=None): - return Parser.convert_mech(input_file, thermo_file, transport_file, surface_file, - phase_name, extra_file, out_name, quiet, permissive) - -def main(argv): - - longOptions = ['input=', 'thermo=', 'transport=', 'surface=', 'name=', - 'extra=', 'output=', 'permissive', 'help', 'debug', 'quiet', - 'no-validate', 'id='] - - try: - optlist, args = getopt.getopt(argv, 'dh', longOptions) - options = dict() - for o,a in optlist: - options[o] = a - - if args: - raise 
getopt.GetoptError('Unexpected command line option: ' + - repr(' '.join(args))) - - except getopt.GetoptError as e: - print('ck2yaml.py: Error parsing arguments:') - print(e) - print('Run "ck2yaml.py --help" to see usage help.') - sys.exit(1) - - if not options or '-h' in options or '--help' in options: - print(__doc__) - sys.exit(0) - - input_file = options.get('--input') - thermo_file = options.get('--thermo') - permissive = '--permissive' in options - quiet = '--quiet' in options - transport_file = options.get('--transport') - surface_file = options.get('--surface') - - if '--id' in options: - phase_name = options.get('--id', 'gas') - logging.warning("\nFutureWarning: " - "Option '--id=...' will be replaced by '--name=...'") - else: - phase_name = options.get('--name', 'gas') - - if not input_file and not thermo_file: - print('At least one of the arguments "--input=..." or "--thermo=..."' - ' must be provided.\nRun "ck2yaml.py --help" to see usage help.') - sys.exit(1) - - extra_file = options.get('--extra') - - if '--output' in options: - out_name = options['--output'] - if not out_name.endswith('.yaml') and not out_name.endswith('.yml'): - out_name += '.yaml' - elif input_file: - out_name = os.path.splitext(input_file)[0] + '.yaml' - else: - out_name = os.path.splitext(thermo_file)[0] + '.yaml' - - surfaces = Parser.convert_mech(input_file, thermo_file, transport_file, - surface_file, phase_name, extra_file, - out_name, quiet, permissive) - - # Do full validation by importing the resulting mechanism - if not input_file: - # Can't validate input files that don't define a phase - return - - if '--no-validate' in options: - return - - try: - import cantera as ct - except ImportError: - print('WARNING: Unable to import Cantera Python module. 
Output '
-              'mechanism has not been validated')
-        sys.exit(0)
-
-    try:
-        print('Validating mechanism...', end='')
-        gas = ct.Solution(out_name)
-        for surf_name in surfaces:
-            phase = ct.Interface(out_name, surf_name, [gas])
-        print('PASSED.')
-    except RuntimeError as e:
-        print('FAILED.')
-        print(e)
-        sys.exit(1)
-
-
-def script_entry_point():
-    main(sys.argv[1:])
-
-if __name__ == '__main__':
-    main(sys.argv[1:])
diff --git a/src/main.py b/src/main.py
index e3712c9..6a8afd1 100644
--- a/src/main.py
+++ b/src/main.py
@@ -2,17 +2,19 @@
 # -*- coding: utf-8 -*-
 
 # This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC
-# and licensed under BSD-3-Clause. See License.txt in the top-level 
+# and licensed under BSD-3-Clause. See License.txt in the top-level
 # directory for license and copyright information.
 
-version = '1.3.1'
+version = "1.3.1"
 
 import os, sys, platform, multiprocessing, pathlib, ctypes
+
 # os.environ['QT_API'] = 'pyside2' # forces pyside2
 from qtpy.QtWidgets import QMainWindow, QApplication, QMessageBox
 from qtpy import uic, QtCore, QtGui
 import numpy as np
+
 # from timeit import default_timer as timer
 
 from plot.plot_main import All_Plots as plot
@@ -20,52 +22,59 @@
 from calculate import mech_fcns, reactors, convert_units
 import appdirs, options_panel_widgets, sim_explorer_widget
 import settings, config_io, save_widget, error_window, help_menu
-
-if os.environ['QT_API'] == 'pyside2': # Silence warning: "Qt WebEngine seems to be initialized from a plugin."
+
+if (
+    os.environ["QT_API"] == "pyside2"
):  # Silence warning: "Qt WebEngine seems to be initialized from a plugin."
     QApplication.setAttribute(QtCore.Qt.AA_ShareOpenGLContexts)
-
+
 # Handle high resolution displays: Minimum recommended resolution 1280 x 960
-if hasattr(QtCore.Qt, 'AA_EnableHighDpiScaling'):
+if hasattr(QtCore.Qt, "AA_EnableHighDpiScaling"):
     QApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)
-if hasattr(QtCore.Qt, 'AA_UseHighDpiPixmaps'):
+if hasattr(QtCore.Qt, "AA_UseHighDpiPixmaps"):
     QApplication.setAttribute(QtCore.Qt.AA_UseHighDpiPixmaps, True)
 
 # set main folder
-path = {'main': pathlib.Path(sys.argv[0]).parents[0].resolve()}
+path = {"main": pathlib.Path(sys.argv[0]).parents[0].resolve()}
 
 # set appdata folder using AppDirs library (but just using the source code file)
-dirs = appdirs.AppDirs(appname='Frhodo', roaming=True, appauthor=False)
-path['appdata'] = pathlib.Path(dirs.user_config_dir)
-path['appdata'].mkdir(parents=True, exist_ok=True) # Make path if it doesn't exist
-shut_down = {'bool': False}
+dirs = appdirs.AppDirs(appname="Frhodo", roaming=True, appauthor=False)
+path["appdata"] = pathlib.Path(dirs.user_config_dir)
+path["appdata"].mkdir(parents=True, exist_ok=True)  # Make path if it doesn't exist
+shut_down = {"bool": False}
+
 
 class Main(QMainWindow):
     def __init__(self, app, path):
         super().__init__()
         self.app = app
         self.path_set = settings.Path(self, path)
-        uic.loadUi(str(self.path['main']/'UI'/'main_window.ui'), self) # ~0.4 sec
-        self.splitter.moveSplitter(0, 1) # moves splitter 0 as close to 1 as possible
-        self.setWindowIcon(QtGui.QIcon(str(self.path['main']/'UI'/'graphics'/'main_icon.png')))
-
+        uic.loadUi(str(self.path["main"] / "UI" / "main_window.ui"), self)  # ~0.4 sec
+        self.splitter.moveSplitter(0, 1)  # moves splitter 0 as close to 1 as possible
+        self.setWindowIcon(
+            QtGui.QIcon(str(self.path["main"] / "UI" / "graphics" / "main_icon.png"))
+        )
+
         # Start threadpools
         self.threadpool = QtCore.QThreadPool()
-        self.threadpool.setMaxThreadCount(2) # Sets thread count to 1 (1 for gui - this is implicit, 1 for calc)
-
+        self.threadpool.setMaxThreadCount(
+            2
+        )  # Sets thread count to 1 (1 for gui - this is implicit, 1 for calc)
+
         # Set selected tabs
         for tab_widget in [self.option_tab_widget, self.plot_tab_widget]:
             tab_widget.setCurrentIndex(0)
-
+
         # Set Clipboard
         self.clipboard = QApplication.clipboard()
-
-        self.var = {'reactor': {'t_unit_conv': 1}}
+
+        self.var = {"reactor": {"t_unit_conv": 1}}
         self.SIM = reactors.Simulation_Result()
         self.mech_loaded = False
         self.run_block = True
         self.convert_units = convert_units.Convert_Units(self)
         self.series = settings.series(self)
-
+
         self.sim_explorer = sim_explorer_widget.SIM_Explorer_Widgets(self)
         self.plot = plot(self)
         options_panel_widgets.Initialize(self)
@@ -76,231 +85,302 @@ def __init__(self, app, path):
         self.save_sim_button.clicked.connect(self.save_sim.execute)
         self.action_Save.triggered.connect(self.save_sim.execute)
 
-        if shut_down['bool']:
+        if shut_down["bool"]:
             sys.exit()
         else:
             self.show()
-            self.app.processEvents() # allow everything to draw properly
-
+            self.app.processEvents()  # allow everything to draw properly
+
         # Initialize Settings
         self.initialize_settings()  # ~ 4 sec
 
         # Setup help menu
         self.version = version
         help_menu.HelpMenu(self)
-
+
     def initialize_settings(self):  # TODO: Solving for loaded shock twice
-        msgBox = MessageWindow(self, 'Loading...')
+        msgBox = MessageWindow(self, "Loading...")
         self.app.processEvents()
-        self.var['old_shock_choice'] = self.var['shock_choice'] = 1
-
+        self.var["old_shock_choice"] = self.var["shock_choice"] = 1
+
         self.user_settings = config_io.GUI_settings(self)
         self.user_settings.load()
-
-        self.load_full_series = self.load_full_series_box.isChecked() # TODO: Move to somewhere else?
-
+
+        self.load_full_series = (
+            self.load_full_series_box.isChecked()
+        )  # TODO: Move to somewhere else?
+
         # load previous paths if file in path, can be accessed, and is a file
-        if ('path_file' in self.path and os.access(self.path['path_file'], os.R_OK) and
-            self.path['path_file'].is_file()):
-
-            self.path_set.load_dir_file(self.path['path_file']) # ~3.9 sec
-
+        if (
+            "path_file" in self.path
+            and os.access(self.path["path_file"], os.R_OK)
+            and self.path["path_file"].is_file()
+        ):
+            self.path_set.load_dir_file(self.path["path_file"])  # ~3.9 sec
+
         self.update_user_settings()
-        self.run_block = False # Block multiple simulations from running during initialization
-        self.run_single() # Attempt simulation after initialization completed
+        self.run_block = (
+            False  # Block multiple simulations from running during initialization
+        )
+        self.run_single()  # Attempt simulation after initialization completed
 
         msgBox.close()
-
-    def load_mech(self, event = None):
+
+    def load_mech(self, event=None):
         def mechhasthermo(mech_path):
-            f = open(mech_path, 'r')
+            f = open(mech_path, "r")
             while True:
                 line = f.readline()
-                if '!' in line[0:2]:
+                if "!" in line[0:2]:
                     continue
-                if 'ther' in line.split('!')[0].strip().lower():
+                if "ther" in line.split("!")[0].strip().lower():
                     return True
-
+
                 if not line:
                     break
-
+
             f.close()
             return False
-
-        if self.mech_select_comboBox.count() == 0: return # if no items return, unsure if this is needed now
-
+
+        if self.mech_select_comboBox.count() == 0:
+            return  # if no items return, unsure if this is needed now
+
         # Specify mech file path
-        self.path['mech'] = self.path['mech_main'] / str(self.mech_select_comboBox.currentText())
-        if not self.path['mech'].is_file(): # if it's not a file, then it was deleted
-            self.path_set.mech() # update mech pulldown choices
+        self.path["mech"] = self.path["mech_main"] / str(
+            self.mech_select_comboBox.currentText()
+        )
+        if not self.path["mech"].is_file():  # if it's not a file, then it was deleted
+            self.path_set.mech()  # update mech pulldown choices
             return
-
+
         # Check use thermo box viability
-        if mechhasthermo(self.path['mech']):
+        if mechhasthermo(self.path["mech"]):
             if self.thermo_select_comboBox.count() == 0:
-                self.use_thermo_file_box.setDisabled(True) # disable checkbox if no thermo in mech file
+                self.use_thermo_file_box.setDisabled(
+                    True
+                )  # disable checkbox if no thermo in mech file
             else:
                 self.use_thermo_file_box.setEnabled(True)
 
             # Autoselect checkbox off if thermo exists in mech
-            if self.sender() is None or 'use_thermo_file_box' not in self.sender().objectName():
-                self.use_thermo_file_box.blockSignals(True) # stop set from sending signal, causing double load
-                self.use_thermo_file_box.setChecked(False)
-                self.use_thermo_file_box.blockSignals(False) # allow signals again
+            if (
+                self.sender() is None
+                or "use_thermo_file_box" not in self.sender().objectName()
+            ):
+                self.use_thermo_file_box.blockSignals(
+                    True
+                )  # stop set from sending signal, causing double load
+                self.use_thermo_file_box.setChecked(False)
+                self.use_thermo_file_box.blockSignals(False)  # allow signals again
         else:
-            self.use_thermo_file_box.blockSignals(True) # stop set from sending signal, causing double load
+            self.use_thermo_file_box.blockSignals(
+                True
+            )  # stop set from sending signal, causing double load
             self.use_thermo_file_box.setChecked(True)
-            self.use_thermo_file_box.blockSignals(False) # allow signals again
-            self.use_thermo_file_box.setDisabled(True) # disable checkbox if no thermo in mech file
+            self.use_thermo_file_box.blockSignals(False)  # allow signals again
+            self.use_thermo_file_box.setDisabled(
+                True
+            )  # disable checkbox if no thermo in mech file
 
         # Enable thermo select based on use_thermo_file_box
         if self.use_thermo_file_box.isChecked():
             self.thermo_select_comboBox.setEnabled(True)
         else:
             self.thermo_select_comboBox.setDisabled(True)
-
-        # Specify thermo file path 
+
+        # Specify thermo file path
         if self.use_thermo_file_box.isChecked():
             if self.thermo_select_comboBox.count() > 0:
-                self.path['thermo'] = self.path['mech_main'] / str(self.thermo_select_comboBox.currentText())
+                self.path["thermo"] = self.path["mech_main"] / str(
+                    self.thermo_select_comboBox.currentText()
+                )
             else:
-                self.log.append('Error loading mech:\nNo thermodynamics given', alert=True)
+                self.log.append(
+                    "Error loading mech:\nNo thermodynamics given", alert=True
+                )
                 return
         else:
-            self.path['thermo'] = None
-
+            self.path["thermo"] = None
+
         # Initialize Mechanism
-        self.log.clear([]) # Clear log when mechanism changes to avoid log errors about prior mech
+        self.log.clear(
+            []
+        )  # Clear log when mechanism changes to avoid log errors about prior mech
         mech_load_output = self.mech.load_mechanism(self.path)
-        self.log.append(mech_load_output['message'], alert=not mech_load_output['success'])
-        self.mech_loaded = mech_load_output['success']
-
-        if not mech_load_output['success']: # if error: update species and return
+        self.log.append(
+            mech_load_output["message"], alert=not mech_load_output["success"]
+        )
+        self.mech_loaded = mech_load_output["success"]
+
+        if not mech_load_output["success"]:  # if error: update species and return
             self.mix.update_species()
-            self.log._blink(True) # updating_species is causing blink to stop due to successful shock calculation
+            self.log._blink(
+                True
+            )  # updating_species is causing blink to stop due to successful shock calculation
             return
-
+
         # Initialize tables and trees
         self.tree.set_trees(self.mech)
-        self.mix.update_species() # this was commented out, could be because multiple calls to solver from update_mix / setItems
-
+        self.mix.update_species()  # this was commented out, could be because multiple calls to solver from update_mix / setItems
+
         tabIdx = self.plot_tab_widget.currentIndex()
         tabText = self.plot_tab_widget.tabText(tabIdx)
-        if tabText == 'Signal/Sim':
+        if tabText == "Signal/Sim":
             # Force observable_widget to update
-            observable = self.plot.observable_widget.widget['main_parameter'].currentText()
-            self.plot.observable_widget.widget['main_parameter'].currentIndexChanged[str].emit(observable)
-        elif tabText == 'Sim Explorer': # TODO: This gets called twice?
+            observable = self.plot.observable_widget.widget[
+                "main_parameter"
+            ].currentText()
+            self.plot.observable_widget.widget["main_parameter"].currentIndexChanged[
+                str
+            ].emit(observable)
+        elif tabText == "Sim Explorer":  # TODO: This gets called twice?
             self.sim_explorer.update_all_main_parameter()
-
+
     def shock_choice_changed(self, event):
-        if 'exp_main' in self.directory.invalid: # don't allow shock change if problem with exp directory
+        if (
+            "exp_main" in self.directory.invalid
+        ):  # don't allow shock change if problem with exp directory
             return
-
-        self.var['old_shock_choice'] = self.var['shock_choice']
-        self.var['shock_choice'] = event
-
-        self.shockRollingList = ['P1', 'u1'] # reset rolling list
+
+        self.var["old_shock_choice"] = self.var["shock_choice"]
+        self.var["shock_choice"] = event
+
+        self.shockRollingList = ["P1", "u1"]  # reset rolling list
         self.rxn_change_history = []  # reset tracking of rxn numbers changed
-
+
         if not self.optimize_running:
             self.log.clear([])
 
-        self.series.change_shock() # link display_shock to correct set and
-
-    def update_user_settings(self, event = None):
+        self.series.change_shock()  # link display_shock to correct set and
+
+    def update_user_settings(self, event=None):
         # This is one is located on the Files tab
         shock = self.display_shock
-        self.series.set('series_name', self.exp_series_name_box.text())
-
-        t_unit_conv = self.var['reactor']['t_unit_conv']
-        if self.time_offset_box.value()*t_unit_conv != shock['time_offset']: # if values are different
-            self.series.set('time_offset', self.time_offset_box.value()*t_unit_conv)
-            if hasattr(self.mech_tree, 'rxn'): # checked if copy valid in function
-                self.tree._copy_expanded_tab_rates() # copy rates and time offset
-
-        self.var['time_unc'] = self.time_unc_box.value()*t_unit_conv
-
+        self.series.set("series_name", self.exp_series_name_box.text())
+
+        t_unit_conv = self.var["reactor"]["t_unit_conv"]
+        if (
+            self.time_offset_box.value() * t_unit_conv != shock["time_offset"]
+        ):  # if values are different
+            self.series.set("time_offset", self.time_offset_box.value() * t_unit_conv)
+            if hasattr(self.mech_tree, "rxn"):  # checked if copy valid in function
+                self.tree._copy_expanded_tab_rates()  # copy rates and time offset
+
+        self.var["time_unc"] = self.time_unc_box.value() * t_unit_conv
+
         # self.user_settings.save() # saves settings everytime a variable is changed
 
         if event is not None:
             sender = self.sender().objectName()
-            if 'time_offset' in sender and hasattr(self, 'SIM'): # Don't rerun SIM if it exists
-                if hasattr(self.SIM, 'independent_var') and hasattr(self.SIM, 'observable'):
-                    self.plot.signal.update_sim(self.SIM.independent_var, self.SIM.observable)
-            elif any(x in sender for x in ['end_time', 'sim_interp_factor', 'ODE_solver', 'rtol', 'atol']):
+            if "time_offset" in sender and hasattr(
+                self, "SIM"
+            ):  # Don't rerun SIM if it exists
+                if hasattr(self.SIM, "independent_var") and hasattr(
+                    self.SIM, "observable"
+                ):
+                    self.plot.signal.update_sim(
+                        self.SIM.independent_var, self.SIM.observable
+                    )
+            elif any(
+                x in sender
+                for x in ["end_time", "sim_interp_factor", "ODE_solver", "rtol", "atol"]
+            ):
                 self.run_single()
-            elif self.display_shock['exp_data'].size > 0: # If exp_data exists
+            elif self.display_shock["exp_data"].size > 0:  # If exp_data exists
                 self.plot.signal.update(update_lim=False)
                 self.plot.signal.canvas.draw()
-        '''
+        """
         # debug
         for i in self.var:
             print('key: {:<14s} value: {:<16s}'.format(i, str(self.var[i])))
-        '''
-
-    def keyPressEvent(self, event): pass
-        # THIS IS NOT FULLY FUNCTIONING
-        # http://ftp.ics.uci.edu/pub/centos0/ics-custom-build/BUILD/PyQt-x11-gpl-4.7.2/doc/html/qkeyevent.html
-        # print(event.modifiers(),event.text())
-
+        """
+
+    def keyPressEvent(self, event):
+        pass
+
+    # THIS IS NOT FULLY FUNCTIONING
+    # http://ftp.ics.uci.edu/pub/centos0/ics-custom-build/BUILD/PyQt-x11-gpl-4.7.2/doc/html/qkeyevent.html
+    # print(event.modifiers(),event.text())
+
     def run_single(self, event=None, t_save=None, rxn_changed=False):
-        if self.run_block: return
-        if not self.mech_loaded: return # if mech isn't loaded successfully, exit
-        if not hasattr(self.mech_tree, 'rxn'): return # if mech tree not set up, exit
-
+        if self.run_block:
+            return
+        if not self.mech_loaded:
+            return  # if mech isn't loaded successfully, exit
+        if not hasattr(self.mech_tree, "rxn"):
+            return  # if mech tree not set up, exit
+
         shock = self.display_shock
-
-        T_reac, P_reac, mix = shock['T_reactor'], shock['P_reactor'], shock['thermo_mix']
+
+        T_reac, P_reac, mix = (
+            shock["T_reactor"],
+            shock["P_reactor"],
+            shock["thermo_mix"],
+        )
         self.tree.update_rates()
-
+
         # calculate all properties or observable by sending t_save
         tabIdx = self.plot_tab_widget.currentIndex()
         tabText = self.plot_tab_widget.tabText(tabIdx)
-        if tabText == 'Sim Explorer':
+        if tabText == "Sim Explorer":
             t_save = np.array([0])
-
-        SIM_kwargs = {'u_reac': shock['u2'], 'rho1': shock['rho1'], 'observable': self.display_shock['observable'],
-                      't_lab_save': t_save, 'sim_int_f': self.var['reactor']['sim_interp_factor'],
-                      'ODE_solver': self.var['reactor']['ode_solver'],
-                      'rtol': self.var['reactor']['ode_rtol'], 'atol': self.var['reactor']['ode_atol']}
-
-        if '0d Reactor' in self.var['reactor']['name']:
-            SIM_kwargs['solve_energy'] = self.var['reactor']['solve_energy']
-            SIM_kwargs['frozen_comp'] = self.var['reactor']['frozen_comp']
-
-        self.SIM, verbose = self.mech.run(self.var['reactor']['name'], self.var['reactor']['t_end'],
-                                          T_reac, P_reac, mix, **SIM_kwargs)
-
-        if verbose['success']:
+
+        SIM_kwargs = {
+            "u_reac": shock["u2"],
+            "rho1": shock["rho1"],
+            "observable": self.display_shock["observable"],
+            "t_lab_save": t_save,
+            "sim_int_f": self.var["reactor"]["sim_interp_factor"],
+            "ODE_solver": self.var["reactor"]["ode_solver"],
+            "rtol": self.var["reactor"]["ode_rtol"],
+            "atol": self.var["reactor"]["ode_atol"],
+        }
+
+        if "0d Reactor" in self.var["reactor"]["name"]:
+            SIM_kwargs["solve_energy"] = self.var["reactor"]["solve_energy"]
+            SIM_kwargs["frozen_comp"] = self.var["reactor"]["frozen_comp"]
+
+        self.SIM, verbose = self.mech.run(
+            self.var["reactor"]["name"],
+            self.var["reactor"]["t_end"],
+            T_reac,
+            P_reac,
+            mix,
+            **SIM_kwargs
+        )
+
+        if verbose["success"]:
             self.log._blink(False)
         else:
-            self.log.append(verbose['message'])
-
+            self.log.append(verbose["message"])
+
         if self.SIM is not None:
-            self.plot.signal.update_sim(self.SIM.independent_var, self.SIM.observable, rxn_changed)
-            if tabText == 'Sim Explorer':
+            self.plot.signal.update_sim(
+                self.SIM.independent_var, self.SIM.observable, rxn_changed
+            )
+            if tabText == "Sim Explorer":
                 self.sim_explorer.populate_main_parameters()
-                self.sim_explorer.update_plot(self.SIM) # sometimes duplicate updates
+                self.sim_explorer.update_plot(self.SIM)  # sometimes duplicate updates
         else:
             nan = np.array([np.nan, np.nan])
-            self.plot.signal.update_sim(nan, nan) # make sim plot blank
-            if tabText == 'Sim Explorer':
+            self.plot.signal.update_sim(nan, nan)  # make sim plot blank
+            if tabText == "Sim Explorer":
                 self.sim_explorer.update_plot(None)
-            return # If mech error exit function
+            return  # If mech error exit function
 
     # def raise_error(self):
-        # assert False
+    # assert False
+
 
-
-if __name__ == '__main__':
-    if platform.system() == 'Windows': # this is required for pyinstaller on windows
+if __name__ == "__main__":
+    if platform.system() == "Windows":  # this is required for pyinstaller on windows
         multiprocessing.freeze_support()
 
-    if getattr(sys, 'frozen', False): # if frozen minimize console immediately
-        ctypes.windll.user32.ShowWindow(ctypes.windll.kernel32.GetConsoleWindow(), 0)
-
+    if getattr(sys, "frozen", False):  # if frozen minimize console immediately
+        ctypes.windll.user32.ShowWindow(
+            ctypes.windll.kernel32.GetConsoleWindow(), 0
+        )
+
     app = QApplication(sys.argv)
     sys.excepthook = error_window.excepthookDecorator(app, path, shut_down)
 
     main = Main(app, path)
     sys.exit(app.exec_())
-
diff --git a/src/mech_widget.py b/src/mech_widget.py
index 1963f32..6c7ef01 100644
--- a/src/mech_widget.py
+++ b/src/mech_widget.py
@@ -14,6 +14,17 @@
 from timeit import default_timer as timer
 
+from calculate.mech_fcns import arrhenius_coefNames
+
+
+default_coef_abbreviation = {
+    "pre_exponential_factor": "A",
+    "temperature_exponent": "n",
+    "activation_energy": "Ea"}
+
+coef_abbreviation = {key: default_coef_abbreviation[key] for key in arrhenius_coefNames}
+
+
 def silentSetValue(obj, value):
     obj.blockSignals(True)  # stop changing text from signaling
     obj.setValue(value)
@@ -97,21 +108,21 @@ def get_coef_abbreviation(coefName):
         parent = self.parent()
         data = []
         for rxnIdx, rxn in enumerate(mech.gas.reactions()):
-            rxn_type = rxn.__class__.__name__.replace('Reaction', ' Reaction')
-            if type(rxn) in [ct.ElementaryReaction, ct.ThreeBodyReaction]:
+            rxn_type = mech.reaction_type(rxn)
+
+            if type(rxn.rate) is ct.ArrheniusRate:
                 coeffs = []  # Setup Coeffs for Tree
-                for coefName, coefVal in mech.coeffs[rxnIdx][0].items():
-                    coefAbbr = get_coef_abbreviation(coefName)
+                for coefName, coefAbbr in coef_abbreviation.items():
                     coeffs.append([coefAbbr, coefName, mech.coeffs[rxnIdx][0]])
-                coeffs_order = [1, 2, 0]  # Reorder coeffs into A, n, Ea
+                coeffs_order = [1, 2, 0]  # order coeffs into A, n, Ea
 
-                data.append({'num': rxnIdx, 'eqn': rxn.equation, 'type': 'Arrhenius',
+                data.append({'num': rxnIdx, 'eqn': rxn.equation, 'type': rxn_type,
                              'coeffs': coeffs, 'coeffs_order': coeffs_order})
-            elif type(rxn) in [ct.PlogReaction, ct.FalloffReaction]:
+            elif type(rxn.rate) in [ct.PlogRate, ct.FalloffRate, ct.TroeRate, ct.SriRate]:
                 coeffs = []
                 for key in ['high', 'low']:
-                    if type(rxn) is ct.PlogReaction:
+                    if type(rxn.rate) is ct.PlogRate:
                         if key == 'high':
                             n = len(mech.coeffs[rxnIdx]) - 1
                         else:
@@ -119,12 +130,10 @@
                     else:
                         n = f'{key}_rate'
 
-                    for coefName, coefVal in mech.coeffs[rxnIdx][n].items():
-                        if coefName == 'Pressure': continue  # skip pressure
-                        coefAbbr = get_coef_abbreviation(coefName)
+                    for coefName, coefAbbr in coef_abbreviation.items():
                         coeffs.append([f'{coefAbbr}_{key}', coefName, mech.coeffs[rxnIdx][n]])
 
-                coeffs_order = [1, 2, 0, 4, 5, 3]  # Reorder coeffs into A_high, n_high, Ea_high, A_low
+                coeffs_order = [1, 2, 0, 4, 5, 3]  # order coeffs into A_high, n_high, Ea_high, A_low
 
                 data.append({'num': rxnIdx, 'eqn': rxn.equation, 'type': rxn_type,
                              'coeffs': coeffs, 'coeffs_order': coeffs_order})
@@ -161,7 +170,7 @@ def _set_mech_tree(self, rxn_matrix):
         last_arrhenius = 0
         for i, rxn in enumerate(rxn_matrix):
             print(rxn)
-            if rxn['type'] != 'Arrhenius':
+            if rxn['type'] != 'Arrhenius Reaction':
                 if i > 0:
                     tree.setTabOrder(tree.rxn[i-1]['rateBox'], tree.rxn[i]['rateBox'])
                 else:
@@ -206,7 +215,7 @@ def set_rate_widget(unc={}):
             # clear rows of qstandarditem (L1)
             L1.removeRows(0, L1.rowCount())
 
-            if rxn['type'] in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']:
+            if rxn['type'] in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction']:
                 widget = set_rate_widget(unc={'type': parent.mech.rate_bnds[rxnNum]['type'],
                                               'value': parent.mech.rate_bnds[rxnNum]['value']})
                 widget.uncValBox.valueChanged.connect(self.update_uncertainties)  # no update between F and %
@@ -223,7 +232,7 @@ def set_rate_widget(unc={}):
                     conv_type = f'Cantera2{self.mech_tree_type}'
                     coef = self.convert._arrhenius(rxnNum, [coef], conv_type)[0]
 
-                    if rxn['type'] == 'Arrhenius':
+                    if rxn['type'] == 'Arrhenius Reaction':
                         bnds_key = 'rate'
                     elif rxn['type'] in ['Plog Reaction', 'Falloff Reaction']:
                         if 'high' in coef[0]:
@@ -374,7 +383,7 @@ def update_rates(self, rxnNum=None):
         rxn_rate = parent.series.rates(shock)  # update rates from settings
         if rxn_rate is None: return
 
-        num_reac_all = np.sum(parent.mech.gas.reactant_stoich_coeffs(), axis=0)
+        num_reac_all = np.sum(parent.mech.gas.reactant_stoich_coeffs, axis=0)
 
         if rxnNum is not None:
             if type(rxnNum) in [list, np.ndarray]:
@@ -439,7 +448,7 @@ def update_uncertainties(self, event=None, sender=None):
         for rxnNum in rxnNumRange:  # update all rate uncertainties
             rxn = parent.mech_tree.rxn[rxnNum]
-            if rxn['rxnType'] not in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']: # skip if not allowable type
+            if rxn['rxnType'] not in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction']: # skip if not allowable type
                 mech.rate_bnds[rxnNum]['opt'] = False
                 continue
 
             if 'uncBox' not in rxn:
@@ -469,7 +478,6 @@ def update_coef_rate_from_opt(self, coef_opt, x):
         parent = self.parent()
 
         conv_type = 'Cantera2' + self.mech_tree_type
-        x0 = []
         for i, idxDict in enumerate(coef_opt):  # set changes to both spinboxes and backend coeffs
             rxnIdx, coefIdx = idxDict['rxnIdx'], idxDict['coefIdx']
             coeffs_key = idxDict['key']['coeffs']
@@ -520,7 +528,7 @@ def update_box_reset_values(self, rxnNum=None):
         for rxnNum in rxnNumRange:
             rxn = parent.mech_tree.rxn[rxnNum]
-            if (rxn['rxnType'] not in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction'] or 'valueBox' not in rxn): continue
+            if (rxn['rxnType'] not in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction'] or 'valueBox' not in rxn): continue
 
             valBoxes = parent.mech_tree.rxn[rxnNum]['valueBox']
@@ -587,7 +595,7 @@ def _tabExpanded(self, sender_idx, expanded):          # set uncboxes to not
             sender.info['hasExpanded'] = True
             self._set_mech_widgets(sender)
 
-        if sender.info['rxnType'] in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']:
+        if sender.info['rxnType'] in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction']:
             for box in parent.mech_tree.rxn[rxnNum]['uncBox']:
                 # box.blockSignals(True)
                 box.setValue(-1)
@@ -623,7 +631,7 @@ def setCopyRates(self, event):
         popup_menu.addAction('Reset All', lambda: self._reset_all())
 
         # this causes independent/dependent to not show if right click is not on rxn
-        if rxn is not None and 'Arrhenius' in rxn['rxnType']:
+        if rxn is not None and 'Arrhenius Reaction' in rxn['rxnType']:
             popup_menu.addSeparator()
 
             dependentAction = QAction('Set Dependent', checkable=True)
@@ -639,7 +647,7 @@ def _reset_all(self):
         self.run_sim_on_change = False
         mech = parent.mech
         for rxn in parent.mech_tree.rxn:
-            if (rxn['rxnType'] not in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']
+            if (rxn['rxnType'] not in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction']
                 or 'valueBox' not in rxn): continue  # only reset Arrhenius boxes
 
             for box in rxn['valueBox']:
@@ -879,7 +887,7 @@ def __init__(self, parent, coef, info, *args, **kwargs):
 
 
 class rxnRate(QWidget):
-    def __init__(self, parent, info, rxnType='Arrhenius', label='', *args, **kwargs):
+    def __init__(self, parent, info, rxnType='Arrhenius Reaction', label='', *args, **kwargs):
         QWidget.__init__(self, parent)
 
         self.parent = parent
@@ -905,7 +913,7 @@ def __init__(self, parent, info, rxnType='Arrhenius', label='', *args, **kwargs)
             layout.addItem(spacer, 0, 1)
             layout.addWidget(self.valueBox, 0, 2)
 
-        if rxnType in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']:
+        if rxnType in ['Arrhenius Reaction', 'Plog Reaction', 'Falloff Reaction']:
             info['mainValueBox'] = self.valueBox
 
             if 'unc_value' in kwargs and 'unc_type' in kwargs:
diff --git a/src/misc_widget.py b/src/misc_widget.py
index 36f3e59..72dc95b 100644
--- a/src/misc_widget.py
+++ b/src/misc_widget.py
@@ -1,5 +1,5 @@
 # This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC
-# and licensed under BSD-3-Clause. See License.txt in the top-level 
+# and licensed under BSD-3-Clause. See License.txt in the top-level
 # directory for license and copyright information.
 
 import re, sys
@@ -7,21 +7,23 @@
 from qtpy.QtWidgets import *
 from qtpy import QtWidgets, QtGui, QtCore
 from calculate.convert_units import OoM
-
+
 # Regular expression to find floats. Match groups are the whole string, the
 # whole coefficient, the decimal part of the coefficient, and the exponent
 # part.
-_float_re = re.compile(r'(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)')
+_float_re = re.compile(r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)")
+
 
 def valid_float_string(string):
     match = _float_re.search(string)
     return match.groups()[0] == string if match else False
-
+
+
 class FloatValidator(QtGui.QValidator):
     def validate(self, string, position):
         if valid_float_string(string):
             state = QtGui.QValidator.Acceptable
-        elif string == "" or string[position-1].lower() in 'e.-+':
+        elif string == "" or string[position - 1].lower() in "e.-+":
            state = QtGui.QValidator.Intermediate
         else:
             state = QtGui.QValidator.Invalid
@@ -30,76 +32,95 @@ def validate(self, string, position):
     def fixup(self, text):
         match = _float_re.search(text)
         return match.groups()[0] if match else ""
-
+
+
 class ScientificDoubleSpinBox(QtWidgets.QDoubleSpinBox):
     resetValueChanged = QtCore.Signal(float)
+
     def __init__(self, reset_popup=True, *args, **kwargs):
         self.validator = FloatValidator()
-        if 'numFormat' in kwargs:
-            self.numFormat = kwargs.pop('numFormat')
+        if "numFormat" in kwargs:
+            self.numFormat = kwargs.pop("numFormat")
         else:
-            self.numFormat = 'g'
-
+            self.numFormat = "g"
+
         self.setStrDecimals(6)  # number of decimals displayed
         super().__init__(*args, **kwargs)
         self.cb = QApplication.clipboard()
 
         self.setKeyboardTracking(False)
         self.setMinimum(-sys.float_info.max)
         self.setMaximum(sys.float_info.max)
-        self.setDecimals(int(np.floor(np.log10(sys.float_info.max)))) # big for setting value
+        self.setDecimals(
+            int(np.floor(np.log10(sys.float_info.max)))
+        )  # big for setting value
         self.setSingleStep(0.1)
         self.setSingleIntStep(1)
         self.setSingleExpStep(0.1)
         self.setAccelerated(True)
         # self.installEventFilter(self)
-
-        if 'value' in kwargs:
-            self.setValue(kwargs['value'])
+
+        if "value" in kwargs:
+            self.setValue(kwargs["value"])
         else:
             self.setValue(0)
-
+
         self._set_reset_value(self.value())
-
+
         if reset_popup:
             # Set popup
             self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
             self.customContextMenuRequested.connect(self._popup_menu)
-
+
             # Set Shortcuts
-            shortcut_fcn_pair = [['Ctrl+R', lambda: self._reset()], ['Ctrl+C', lambda: self._copy()],
-                                 ['Ctrl+V', lambda: self._paste()]]
-            for shortcut, fcn in shortcut_fcn_pair: # TODO: need to fix hover shortcuts not working
-                QShortcut(QtGui.QKeySequence(shortcut), self, activated=fcn, context=QtCore.Qt.WidgetShortcut)
-
+            shortcut_fcn_pair = [
+                ["Ctrl+R", lambda: self._reset()],
+                ["Ctrl+C", lambda: self._copy()],
+                ["Ctrl+V", lambda: self._paste()],
+            ]
+            for (
+                shortcut,
+                fcn,
+            ) in shortcut_fcn_pair:  # TODO: need to fix hover shortcuts not working
+                QShortcut(
+                    QtGui.QKeySequence(shortcut),
+                    self,
+                    activated=fcn,
+                    context=QtCore.Qt.WidgetShortcut,
+                )
+
     # def eventFilter(self, obj, event):    # event filter to allow hover shortcuts
-        # if event.type() == QtCore.QEvent.Enter:
-            # self.setFocus()
-            # return True
-        # elif event.type() == QtCore.QEvent.Leave:
-            # return False
-        # else:
-            # return super().eventFilter(obj, event)
-
+    # if event.type() == QtCore.QEvent.Enter:
+    # self.setFocus()
+    # return True
+    # elif event.type() == QtCore.QEvent.Leave:
+    # return False
+    # else:
+    # return super().eventFilter(obj, event)
+
     def _popup_menu(self, event):
         popup_menu = QMenu(self)
-        popup_menu.addAction('Reset', lambda: self._reset(), 'Ctrl+R')
+        popup_menu.addAction("Reset", lambda: self._reset(), "Ctrl+R")
         popup_menu.addSeparator()
-        popup_menu.addAction('Copy', lambda: self._copy(), 'Ctrl+C')
-        popup_menu.addAction('Paste', lambda: self._paste(), 'Ctrl+V')
+        popup_menu.addAction("Copy", lambda: self._copy(), "Ctrl+C")
+        popup_menu.addAction("Paste", lambda: self._paste(), "Ctrl+V")
         popup_menu.addSeparator()
-        popup_menu.addAction('Set Reset Value', lambda: self._set_reset_value(self.value()))
+        popup_menu.addAction(
+            "Set Reset Value", lambda: self._set_reset_value(self.value())
+        )
         popup_menu.exec_(QtGui.QCursor.pos())
-
+
     def _reset(self, silent=False):
-        self.blockSignals(True) # needed because shortcut isn't always signalling valueChanged.emit
+        self.blockSignals(
+            True
+        )  # needed because shortcut isn't always signalling valueChanged.emit
         self.setValue(self.reset_value)
         self.blockSignals(False)
         if not silent:
             self.valueChanged.emit(self.reset_value)
-
+
     def setStrDecimals(self, value: int):
         self.strDecimals = value
-
+
     def setSingleIntStep(self, value: float):
         self.singleIntStep = value
@@ -109,26 +130,28 @@ def setSingleExpStep(self, value: float):
     def _set_reset_value(self, value):
         self.reset_value = value
         self.resetValueChanged.emit(self.reset_value)
-
+
     def _copy(self):
         self.selectAll()
         cb = self.cb
         cb.clear(mode=cb.Clipboard)
         cb.setText(self.textFromValue(self.value()), mode=cb.Clipboard)
-
+
     def _paste(self):
         previous_value = self.text()
         if self.fixup(self.cb.text()):
             self.setValue(float(self.fixup(self.cb.text())))
         else:
             self.setValue(float(previous_value))
-
+
     def keyPressEvent(self, event):
         if event.matches(QtGui.QKeySequence.Paste):
             self._paste()
-
-        super(ScientificDoubleSpinBox, self).keyPressEvent(event) # don't want to overwrite all shortcuts
-
+
+        super(ScientificDoubleSpinBox, self).keyPressEvent(
+            event
+        )  # don't want to overwrite all shortcuts
+
     def validate(self, text, position):
         return self.validator.validate(text, position)
@@ -140,40 +163,58 @@ def valueFromText(self, text):
     def textFromValue(self, value):
         """Modified form of the 'g' format specifier."""
-        if 'g' in self.numFormat:
+        if "g" in self.numFormat:
             # if full number showing and number decimals less than str, switch to number decimals
-            if abs(OoM(value)) < self.strDecimals and self.decimals() < self.strDecimals:
-                string = "{:.{dec}{numFormat}}".format(value, dec=int(abs(OoM(value)))+1+self.decimals(), numFormat=self.numFormat)
+            if (
+                abs(OoM(value)) < self.strDecimals
+                and self.decimals() < self.strDecimals
+            ):
+                string = "{:.{dec}{numFormat}}".format(
+                    value,
+                    dec=int(abs(OoM(value))) + 1 + self.decimals(),
+                    numFormat=self.numFormat,
+                )
             else:
-                string = "{:.{dec}{numFormat}}".format(value, dec=self.strDecimals, numFormat=self.numFormat)
-        elif 'e' in self.numFormat:
-            string = "{:.{dec}{numFormat}}".format(value, dec=self.strDecimals, numFormat=self.numFormat)
+                string = "{:.{dec}{numFormat}}".format(
+                    value, dec=self.strDecimals, numFormat=self.numFormat
+                )
+        elif "e" in self.numFormat:
+            string = "{:.{dec}{numFormat}}".format(
+                value, dec=self.strDecimals, numFormat=self.numFormat
+            )
 
         string = re.sub("e(-?)0*(\d+)", r"e\1\2", string.replace("e+", "e"))
 
         return string
-
+
     def stepBy(self, steps):
         if self.specialValueText() and self.value() == self.minimum():
             text = self.textFromValue(self.minimum())
-        else: 
+        else:
             text = self.cleanText()
-
+
         old_val = float(text)
-        if self.numFormat == 'g' and abs(OoM(old_val)) < self.strDecimals: # my own custom g
-            val = old_val + self.singleIntStep*steps
+        if (
+            self.numFormat == "g" and abs(OoM(old_val)) < self.strDecimals
+        ):  # my own custom g
+            val = old_val + self.singleIntStep * steps
         else:
+            if self.decimals() < self.strDecimals:
+                singleStep = 0.1
+            else:
+                singleStep = self.singleStep()
+
             old_OoM = OoM(old_val)
-            val = old_val + np.power(10, old_OoM)*self.singleExpStep*steps
+            val = old_val + np.power(10, old_OoM) * self.singleExpStep * steps
             new_OoM = OoM(val)
-            if old_OoM > new_OoM: # needed to step down by new amount 1E5 -> 9.9E6
-                if self.numFormat == 'g' and abs(new_OoM) < self.strDecimals:
-                    val = old_val + self.singleIntStep*steps
+            if old_OoM > new_OoM:  # needed to step down by new amount 1E5 -> 9.9E6
+                if self.numFormat == "g" and abs(new_OoM) < self.strDecimals:
+                    val = old_val + self.singleIntStep * steps
                 else:
-                    val = old_val + np.power(10, new_OoM)*self.singleExpStep*steps
+                    val = old_val + np.power(10, new_OoM) * self.singleExpStep * steps
 
         self.setValue(val)
-
+
+
 class SearchComboBox(QComboBox):
     def __init__(self, parent=None):
         super(SearchComboBox, self).__init__(parent)
@@ -181,7 +222,7 @@ def __init__(self, parent=None):
         self.setFocusPolicy(QtCore.Qt.StrongFocus)
         self.setEditable(True)
         self.setInsertPolicy(QComboBox.NoInsert)
-
+
         # add a filter model to filter matching items
         self.pFilterModel = QtCore.QSortFilterProxyModel(self)
         self.pFilterModel.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive)
@@ -189,36 +230,36 @@ def __init__(self, parent=None):
 
         # add a completer, which uses the filter model
         self.completer = QCompleter(self.pFilterModel, self)
-
+
         # always show all (filtered) completions
         self.completer.setFilterMode(QtCore.Qt.MatchContains)
         self.completer.setCompletionMode(QCompleter.UnfilteredPopupCompletion)
         self.completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive)
 
         self.setCompleter(self.completer)
-
+
         # connect signals
         self.lineEdit().textEdited.connect(self.pFilterModel.setFilterFixedString)
         self.lineEdit().editingFinished.connect(self.on_completer_activated)
         self.completer.activated[str].connect(self.on_completer_activated)
 
-    # on selection of an item from the completer, select the corresponding item from combobox 
+    # on selection of an item from the completer, select the corresponding item from combobox
     def on_completer_activated(self, text=None):
         if text is None:
             text = self.lineEdit().text()
-
+
         old_idx = self.currentIndex()
         if text:
             idx = self.findText(text)
-
-            if idx < 0: # if new text not found, revert to prior text
+
+            if idx < 0:  # if new text not found, revert to prior text
                 idx = old_idx
-        else: # if no text found, revert to prior
+        else:  # if no text found, revert to prior
             idx = old_idx
-
+
         self.setCurrentIndex(idx)
         self.activated[str].emit(self.itemText(idx))
 
-    # on model change, update the models of the filter and completer as well 
+    # on model change, update the models of the filter and completer as well
     def setModel(self, model):
         super().setModel(model)
         self.pFilterModel.setSourceModel(model)
@@ -228,82 +269,89 @@ def setModel(self, model):
     def setModelColumn(self, column):
         self.completer.setCompletionColumn(column)
         self.pFilterModel.setFilterKeyColumn(column)
-        super().setModelColumn(column)    
-
+        super().setModelColumn(column)
+
     def setNewStyleSheet(self, down_arrow_path):
         fontInfo = QtGui.QFontInfo(self.font())
         family = fontInfo.family()
         font_size = fontInfo.pixelSize()
-
-        # stylesheet because of a border on the arrow that I dislike 
-        stylesheet = ["QComboBox { color: black; font-size: " + str(font_size) + "px;",
+
+        # stylesheet because of a border on the arrow that I dislike
+        stylesheet = [
+            "QComboBox { color: black; font-size: " + str(font_size) + "px;",
             "font-family: " + family + "; margin: 0px 0px 1px 1px; border: 0px;",
-            "padding: 1px 0px 0px 0px;}", # This (useless) line resolves a bug with the font color
-            "QComboBox::drop-down { border: 0px; }" # Replaces the whole arrow of the combo box
+            "padding: 1px 0px 0px 0px;}",  # This (useless) line resolves a bug with the font color
+            "QComboBox::drop-down { border: 0px; }"  # Replaces the whole arrow of the combo box
             "QComboBox::down-arrow { image: url(" + down_arrow_path + ");",
-            "width: 14px; height: 14px; }"]
-
-        self.setStyleSheet(' '.join(stylesheet))
+            "width: 14px; height: 14px; }",
+        ]
+
+        self.setStyleSheet(" ".join(stylesheet))
 
-
-class ItemSearchComboBox(SearchComboBox): # track items in itemList
+
+class ItemSearchComboBox(SearchComboBox):  # track items in itemList
     def __init__(self, parent=None):
         super().__init__(parent)
         self.itemList = []
         self.completer.activated.connect(self.on_completer_activated)
-
+
     def addItem(self, item):
         super().addItem(item)
         self.itemList.append(item)
-
+
     def removeItem(self, idx):
         super().removeItem(idx)
         del self.itemList[idx]
-
+
     def clear(self):
         super().clear()
         self.itemList = []
-
-
+
+
class CheckableSearchComboBox(ItemSearchComboBox):
     def __init__(self, parent=None):
         super().__init__(parent)
 
         self.setView(QTreeView())
         self.view().setHeaderHidden(True)
         self.view().setIndentation(0)
-
-        self.view().header().setMinimumSectionSize(0) # set minimum to 0
+
+        self.view().header().setMinimumSectionSize(0)  # set minimum to 0
         self.setSizeAdjustPolicy(QComboBox.AdjustToContents)
-
-        #self.setModelColumn(1)  # sets column for text to the second column
+
+        # self.setModelColumn(1)  # sets column for text to the second column
         self.view().setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
-
+
         self.cb = parent.clipboard
 
         # Set popup
         self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
         self.customContextMenuRequested.connect(lambda event: self._popup_menu(event))
-
+
         # Set Shortcuts
-        shortcut_fcn_pair = [['Ctrl+R', lambda: self._reset()]]
+        shortcut_fcn_pair = [["Ctrl+R", lambda: self._reset()]]
         for shortcut, fcn in shortcut_fcn_pair:
-            QShortcut(QtGui.QKeySequence(shortcut), self, activated=fcn, context=QtCore.Qt.WidgetShortcut)
+            QShortcut(
+                QtGui.QKeySequence(shortcut),
+                self,
+                activated=fcn,
+                context=QtCore.Qt.WidgetShortcut,
+            )
 
         # Connect Signals
         self.view().pressed.connect(self.handleItemPressed)
-
+
     def handleItemPressed(self, index):
         self.setCurrentIndex(index.row())
         self.hidePopup()
 
     def addItem(self, item, model=None):
         super().addItem(item)
-
-        checkbox_item = self.model().item(self.count()-1, 0)
+
+        checkbox_item = self.model().item(self.count() - 1, 0)
         checkbox_item.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
         checkbox_item.setCheckState(QtCore.Qt.Unchecked)
-        #self.view().resizeColumnToContents(0)
+        # self.view().resizeColumnToContents(0)
 
     def addItems(self, items):
         for item in items:
@@ -316,53 +364,61 @@ def itemChecked(self, index):
     def sizeHint(self):
         base = super().sizeHint()
         height = base.height()
-
+
         width = 0
-        if type(self.view()) is QTreeView: # if the view is a QTreeView
-            for i in range(self.view().header().count()): # add size hint for each column
+        if type(self.view()) is QTreeView:  # if the view is a QTreeView
+            for i in range(
+                self.view().header().count()
+            ):  # add size hint for each column
                 width += self.view().sizeHintForColumn(i)
         else:
            width +=
self.view().sizeHintForColumn(0) - - if self.count() > self.maxVisibleItems(): # if scrollbar visible - width += self.view().verticalScrollBar().sizeHint().width() # add width of scrollbar - - width += 2 # TODO: do this properly, I think this is padding - + + if self.count() > self.maxVisibleItems(): # if scrollbar visible + width += ( + self.view().verticalScrollBar().sizeHint().width() + ) # add width of scrollbar + + width += 2 # TODO: do this properly, I think this is padding + return QtCore.QSize(width, height) - + def _popup_menu(self, event): popup_menu = QMenu(self) - popup_menu.addAction('Reset', lambda: self._reset_checkboxes(), 'Ctrl+R') + popup_menu.addAction("Reset", lambda: self._reset_checkboxes(), "Ctrl+R") popup_menu.addSeparator() - popup_menu.addAction('Copy', lambda: self._copy(), 'Ctrl+C') - popup_menu.addAction('Paste', lambda: self._paste(), 'Ctrl+V') + popup_menu.addAction("Copy", lambda: self._copy(), "Ctrl+C") + popup_menu.addAction("Paste", lambda: self._paste(), "Ctrl+V") popup_menu.exec_(QtGui.QCursor.pos()) - + def _reset_checkboxes(self): for i in range(self.count()): item = self.model().item(i, 0) if self.itemChecked(i): - item.setCheckState(QtCore.Qt.Unchecked) # uncheck all - + item.setCheckState(QtCore.Qt.Unchecked) # uncheck all + def _copy(self): text = str(self.currentText()) self.cb.clear() - self.cb.setText(text) # tab for new column, new line for new row - + self.cb.setText(text) # tab for new column, new line for new row + def _paste(self): self.lineEdit().setText(self.cb.text()) + class MessageWindow(QWidget): def __init__(self, parent, text): super().__init__(parent=parent) - n = 7 # Margin size + n = 7 # Margin size layout = QVBoxLayout() - layout.setContentsMargins(n+1, n, n+1, n) + layout.setContentsMargins(n + 1, n, n + 1, n) self.label = QLabel(text) layout.addWidget(self.label) self.setLayout(layout) - - self.setWindowFlags(QtCore.Qt.Window | QtCore.Qt.CustomizeWindowHint | QtCore.Qt.FramelessWindowHint) - 
self.show() \ No newline at end of file + self.setWindowFlags( + QtCore.Qt.Window + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.FramelessWindowHint + ) + self.show() diff --git a/src/options_panel_widgets.py b/src/options_panel_widgets.py index 2d9b71d..76a9dd9 100644 --- a/src/options_panel_widgets.py +++ b/src/options_panel_widgets.py @@ -1,5 +1,5 @@ # This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC -# and licensed under BSD-3-Clause. See License.txt in the top-level +# and licensed under BSD-3-Clause. See License.txt in the top-level # directory for license and copyright information. import pathlib, os, sys @@ -7,7 +7,7 @@ from scipy.optimize import minimize import nlopt import mech_widget, misc_widget, thermo_widget, series_viewer_widget, save_output -from calculate import shock_fcns +from calculate import shock_fcns from calculate.optimize.mech_optimize import Multithread_Optimize from calculate.convert_units import OoM from settings import double_sigmoid @@ -20,69 +20,87 @@ class Initialize(QtCore.QObject): def __init__(self, parent): super().__init__(parent) - parent.log = Log(parent.option_tab_widget, parent.log_box, - parent.clear_log_button, parent.copy_log_button) - + parent.log = Log( + parent.option_tab_widget, + parent.log_box, + parent.clear_log_button, + parent.copy_log_button, + ) + # Setup and Connect Directory Widgets parent.directory = Directories(parent) - + # Connect and Reorder settings boxes box_list = [parent.shock_choice_box, parent.time_offset_box] - + self._set_user_settings_boxes(box_list) - + # Create toolbar experiment number spinbox parent.toolbar_shock_choice_box = QtWidgets.QSpinBox() parent.toolbar_shock_choice_box.setKeyboardTracking(False) - parent.toolbar_shock_choice_box.label = QtWidgets.QAction('Shock # ') + parent.toolbar_shock_choice_box.label = QtWidgets.QAction("Shock # ") parent.toolbar_shock_choice_box.label.setEnabled(False) - 
parent.toolbar_shock_choice_box.setSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) - - parent.toolBar.insertAction(parent.action_Run, parent.toolbar_shock_choice_box.label) + parent.toolbar_shock_choice_box.setSizePolicy( + QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum + ) + + parent.toolBar.insertAction( + parent.action_Run, parent.toolbar_shock_choice_box.label + ) parent.toolBar.insertWidget(parent.action_Run, parent.toolbar_shock_choice_box) parent.toolBar.insertSeparator(parent.action_Run) - parent.toolBar.setStyleSheet("QToolButton:disabled { color: black } " + - "QToolButton:enabled { color: black }") # alter color + parent.toolBar.setStyleSheet( + "QToolButton:disabled { color: black } " + + "QToolButton:enabled { color: black }" + ) # alter color # Set twinned boxes - self.twin = [[parent.time_offset_box, parent.time_offset_twin_box], # main box first - [parent.shock_choice_box, parent.toolbar_shock_choice_box]] + self.twin = [ + [parent.time_offset_box, parent.time_offset_twin_box], # main box first + [parent.shock_choice_box, parent.toolbar_shock_choice_box], + ] for boxes in self.twin: for box in boxes: box.twin = boxes - box.setValue(boxes[0].value()) # set all values to be main + box.setValue(boxes[0].value()) # set all values to be main box.setMinimum(boxes[0].minimum()) box.setMaximum(boxes[0].maximum()) - if box is not parent.shock_choice_box: # prevent double signals, boxes changed in settings - box.valueChanged.connect(self.twin_change) + if ( + box is not parent.shock_choice_box + ): # prevent double signals, boxes changed in settings + box.valueChanged.connect(self.twin_change) # Connect optimization widgets parent.optimization_settings = Optimization(parent) - + # Create list of shock boxes (units and values) and connect them to function parent.shock_widgets = Shock_Settings(parent) - + # Setup tables parent.mix = Mix_Table(parent) parent.weight = Weight_Parameters_Table(parent) parent.exp_unc = 
Uncertainty_Parameters_Table(parent) - + # Setup reactor settings parent.reactor_settings = Reactor_Settings(parent) - + # Setup and Connect Tree Widgets Tables_Tab(parent) - + # Optimize Widgets parent.save = save_output.Save(parent) - parent.optimize = Multithread_Optimize(parent) - parent.run_optimize_button.clicked.connect(lambda: parent.optimize.start_threads()) - + parent.optimize = Multithread_Optimize(parent) + parent.run_optimize_button.clicked.connect( + lambda: parent.optimize.start_threads() + ) + def _set_user_settings_boxes(self, box_list): parent = self.parent() box_list[0].valueChanged.connect(parent.shock_choice_changed) for box in box_list[1:]: - if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance(box, QtWidgets.QSpinBox): + if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance( + box, QtWidgets.QSpinBox + ): box.valueChanged.connect(parent.update_user_settings) elif isinstance(box, QtWidgets.QComboBox): box.currentIndexChanged[int].connect(parent.update_user_settings) @@ -90,27 +108,27 @@ def _set_user_settings_boxes(self, box_list): box.stateChanged.connect(parent.update_user_settings) elif isinstance(box, QtWidgets.QTextEdit): box.textChanged.connect(parent.update_user_settings) - - box_list[0], box_list[1] = box_list[1], box_list[0] # switch list order - for i in range(len(box_list)-1): # Sets the box order - parent.setTabOrder(box_list[i], box_list[i+1]) - + + box_list[0], box_list[1] = box_list[1], box_list[0] # switch list order + for i in range(len(box_list) - 1): # Sets the box order + parent.setTabOrder(box_list[i], box_list[i + 1]) + def twin_change(self, event): - if self.sender() is self.sender().twin[0]: # if box is main, update others + if self.sender() is self.sender().twin[0]: # if box is main, update others for box in self.sender().twin: if box is not self.sender(): - box.blockSignals(True) # stop changing text from signaling + box.blockSignals(True) # stop changing text from signaling box.setValue(event) - 
box.blockSignals(False) # allow signals again - else: - self.sender().twin[0].setValue(event) # if box isn't main, update main - + box.blockSignals(False) # allow signals again + else: + self.sender().twin[0].setValue(event) # if box isn't main, update main + class Directories(QtCore.QObject): def __init__(self, parent): super().__init__(parent) parent = self.parent() - + parent.exp_main_box.textChanged.connect(self.select) parent.exp_main_button.clicked.connect(self.select) parent.mech_main_box.textChanged.connect(self.select) @@ -120,172 +138,211 @@ def __init__(self, parent): parent.path_file_box.textChanged.connect(self.select) parent.path_file_load_button.clicked.connect(self.select) parent.path_file_save_button.clicked.connect(self.save) - + parent.exp_series_name_box.textChanged.connect(parent.update_user_settings) - parent.mech_select_comboBox.activated[str].connect(parent.load_mech) # call function if opened, even if not changed + parent.mech_select_comboBox.activated[str].connect( + parent.load_mech + ) # call function if opened, even if not changed parent.use_thermo_file_box.stateChanged.connect(parent.load_mech) - + parent.load_full_series_box.stateChanged.connect(self.set_load_full_set) self.set_load_full_set() - - self.x_icon = QtGui.QPixmap(str(parent.path['graphics']/'x_icon.png')) - self.check_icon = QtGui.QPixmap(str(parent.path['graphics']/'check_icon.png')) + + self.x_icon = QtGui.QPixmap(str(parent.path["graphics"] / "x_icon.png")) + self.check_icon = QtGui.QPixmap(str(parent.path["graphics"] / "check_icon.png")) self.update_icons() - + def preset(self, selection): parent = self.parent - - parent.preset_settings_choice.setCurrentIndex(parent.preset_settings_choice.findText(selection)) - parent.preset_box.setPlainText(parent.path['Settings'][selection]) - parent.user_settings.load(parent.path['Settings'][selection]) - + + parent.preset_settings_choice.setCurrentIndex( + parent.preset_settings_choice.findText(selection) + ) + 
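`twin_change` above keeps the main settings spinbox and its toolbar twin synchronized while guarding against signal echo loops: the main (first) box pushes its value outward with signals blocked, and any other box just forwards to the main box. A minimal, Qt-free sketch of the same pattern — `Box` is a hypothetical stand-in for a spinbox with a `valueChanged`-style hook:

```python
class Box:
    """Hypothetical minimal spinbox stand-in with a change callback."""
    def __init__(self):
        self._value = 0.0
        self._blocked = False
        self.on_change = None   # plays the role of the valueChanged signal
        self.twin = []

    def blockSignals(self, block):
        self._blocked = block

    def setValue(self, value):
        self._value = value
        if self.on_change and not self._blocked:
            self.on_change(self, value)

    def value(self):
        return self._value

def twin_change(sender, value):
    # Mirror of Initialize.twin_change: main box broadcasts with signals
    # blocked (no echo); a non-main box forwards to the main box instead.
    if sender is sender.twin[0]:
        for box in sender.twin:
            if box is not sender:
                box.blockSignals(True)
                box.setValue(value)
                box.blockSignals(False)
    else:
        sender.twin[0].setValue(value)

main, toolbar = Box(), Box()
for box in (main, toolbar):
    box.twin = [main, toolbar]
    box.on_change = twin_change

toolbar.setValue(7)   # editing the twin routes through the main box
print(main.value(), toolbar.value())
```

Blocking signals on the receiving box is what breaks the cycle: without it, `setValue` on the twin would fire `twin_change` again and ping-pong indefinitely.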
parent.preset_box.setPlainText(parent.path["Settings"][selection]) + parent.user_settings.load(parent.path["Settings"][selection]) + def select(self): parent = self.parent() - - key = '_'.join(self.sender().objectName().split("_")[:-1]) - if 'path_file_load' in key: - key = 'path_file' - dialog = 'load' + + key = "_".join(self.sender().objectName().split("_")[:-1]) + if "path_file_load" in key: + key = "path_file" + dialog = "load" else: - dialog = 'select' - + dialog = "select" + type = self.sender().objectName().split("_")[-1] - if 'button' in type: - description_text = eval('parent.' + key + '_box.placeholderText()') - initial_dir = pathlib.Path.home() # set user as initial folder - if dialog in 'select': + if "button" in type: + description_text = eval("parent." + key + "_box.placeholderText()") + initial_dir = pathlib.Path.home() # set user as initial folder + if dialog in "select": # if this path exists, set previous folder as initial folder - if key in parent.path and parent.path[key].exists() and len(parent.path[key].parts) > 1: + if ( + key in parent.path + and parent.path[key].exists() + and len(parent.path[key].parts) > 1 + ): initial_dir = parent.path[key].parents[0] - - path = QFileDialog.getExistingDirectory(parent, description_text, str(initial_dir)) - elif dialog in 'load': + + path = QFileDialog.getExistingDirectory( + parent, description_text, str(initial_dir) + ) + elif dialog in "load": if key in parent.path and len(parent.path[key].parts) > 1: - initial_dir = parent.path[key].parents[0] # set path_file as initial folder - + initial_dir = parent.path[key].parents[ + 0 + ] # set path_file as initial folder + # if initial_dir doesn't exist or can't be accessed, choose source folder - if not os.access(parent.path[key], os.R_OK) or not initial_dir.is_dir(): - initial_dir = parent.path['main'] - - path = QFileDialog.getOpenFileName(parent, description_text, str(initial_dir), 'ini (*.ini)') + if ( + not os.access(parent.path[key], os.R_OK) + or not 
initial_dir.is_dir() + ): + initial_dir = parent.path["main"] + + path = QFileDialog.getOpenFileName( + parent, description_text, str(initial_dir), "ini (*.ini)" + ) path = path[0] - + if path: - path = pathlib.Path(path).resolve() # convert to absolute path - - if dialog in 'load': # if load is selected and path is valid + path = pathlib.Path(path).resolve() # convert to absolute path + + if dialog in "load": # if load is selected and path is valid parent.path_set.load_dir_file(path) - - eval('parent.' + key + '_box.setPlainText(str(path))') + + eval("parent." + key + "_box.setPlainText(str(path))") parent.path[key] = path - parent.user_settings.save(save_all = False) + parent.user_settings.save(save_all=False) - elif 'box' in type: + elif "box" in type: text = self.sender().toPlainText() + def fn(parent, text): return self.sender().setPlainText(text) - + self.QTextEdit_function(self.sender(), fn, parent, text) - parent.path[key] = pathlib.Path(text) - + parent.path[key] = pathlib.Path(text) + # select will modify box, this section is under if box to prevent double calling self.update_icons() - if 'mech_main' in key and 'mech_main' not in self.invalid: # Mech path changed: update mech combobox + if ( + "mech_main" in key and "mech_main" not in self.invalid + ): # Mech path changed: update mech combobox parent.path_set.set_watch_dir() # update watched directory parent.path_set.mech() # if no mechs found, do not try to load, return - if parent.mech_select_comboBox.count() == 0: return - + if parent.mech_select_comboBox.count() == 0: + return + # if mech not in current path load mech - if 'mech' not in parent.path: + if "mech" not in parent.path: parent.load_mech() - else: # load mech if path or mech name has changed + else: # load mech if path or mech name has changed mech_name = str(parent.mech_select_comboBox.currentText()) - mech_name_changed = mech_name != parent.path['mech'].name - - mech_path = parent.path['mech_main'] - mech_path_changed = mech_path != 
parent.path['mech'].parents[0] - + mech_name_changed = mech_name != parent.path["mech"].name + + mech_path = parent.path["mech_main"] + mech_path_changed = mech_path != parent.path["mech"].parents[0] + if mech_name_changed or mech_path_changed: parent.load_mech() - - if parent.mech.isLoaded: # this is causing the mix table to be blanked out + + if ( + parent.mech.isLoaded + ): # this is causing the mix table to be blanked out parent.mix.update_species() # parent.mix.setItems(parent.mech.gas.species_names) - elif 'exp_main' in key and 'exp_main' not in self.invalid: # Exp path changed: reload list of shocks and load data - series_name = parent.exp_series_name_box.text() - if parent.exp_main_box.toPlainText() not in parent.series.path: # if series already exists, don't create new + elif ( + "exp_main" in key and "exp_main" not in self.invalid + ): # Exp path changed: reload list of shocks and load data + series_name = parent.exp_series_name_box.text() + if ( + parent.exp_main_box.toPlainText() not in parent.series.path + ): # if series already exists, don't create new parent.series.add_series() - + if not series_name or series_name in parent.series.name: - exp_path = parent.path['exp_main'] + exp_path = parent.path["exp_main"] parent.exp_series_name_box.setText(str(exp_path.name)) else: parent.series.change_series() - parent.exp_series_name_box.setText(parent.display_shock['series_name']) - + parent.exp_series_name_box.setText( + parent.display_shock["series_name"] + ) + if not series_name: parent.exp_series_name_box.setText(str(exp_path.name)) def save(self): parent = self.parent() - - description_text = 'Save Directory Settings' - default_location = str(parent.path['path_file']) - path = QFileDialog.getSaveFileName(parent, description_text, default_location, - "Configuration file (*.ini)") - - if path[0] and 'exp_main' not in self.invalid: + + description_text = "Save Directory Settings" + default_location = str(parent.path["path_file"]) + path = 
QFileDialog.getSaveFileName( + parent, description_text, default_location, "Configuration file (*.ini)" + ) + + if path[0] and "exp_main" not in self.invalid: parent.path_set.save_dir_file(path[0]) parent.path_file_box.setPlainText(path[0]) - parent.user_settings.save(save_all = False) + parent.user_settings.save(save_all=False) elif self.invalid: - parent.log.append('Could not save directory settings:\nInvalid directory found') - - + parent.log.append( + "Could not save directory settings:\nInvalid directory found" + ) + def QTextEdit_function(self, object, fn, *args, **kwargs): - object.blockSignals(True) # stop changing text from signalling - old_position = object.textCursor().position() # find old cursor position + object.blockSignals(True) # stop changing text from signalling + old_position = object.textCursor().position() # find old cursor position fn(*args, **kwargs) - - cursor = object.textCursor() # create new cursor (I don't know why) - cursor.setPosition(old_position) # move new cursor to old pos - object.setTextCursor(cursor) # switch current cursor with newly made - object.blockSignals(False) # allow signals again - - def update_icons(self, invalid=[]): # This also checks if paths are valid + + cursor = object.textCursor() # create new cursor (I don't know why) + cursor.setPosition(old_position) # move new cursor to old pos + object.setTextCursor(cursor) # switch current cursor with newly made + object.blockSignals(False) # allow signals again + + def update_icons(self, invalid=[]): # This also checks if paths are valid parent = self.parent() - - key_names = ['path_file', 'exp_main', 'mech_main', 'sim_main'] - + + key_names = ["path_file", "exp_main", "mech_main", "sim_main"] + self.invalid = deepcopy(invalid) for key in key_names: - if key == 'path_file': - if key in parent.path and os.access(parent.path[key], os.R_OK) and parent.path[key].is_file(): - eval('parent.' 
+ key + '_label.setPixmap(self.check_icon)') + if key == "path_file": + if ( + key in parent.path + and os.access(parent.path[key], os.R_OK) + and parent.path[key].is_file() + ): + eval("parent." + key + "_label.setPixmap(self.check_icon)") else: - eval('parent.' + key + '_label.setPixmap(self.x_icon)') + eval("parent." + key + "_label.setPixmap(self.x_icon)") else: - if key in self.invalid: - eval('parent.' + key + '_label.setPixmap(self.x_icon)') - elif (key in parent.path and os.access(parent.path[key], os.R_OK) - and parent.path[key].is_dir() and str(parent.path[key]) != '.'): - - eval('parent.' + key + '_label.setPixmap(self.check_icon)') + eval("parent." + key + "_label.setPixmap(self.x_icon)") + elif ( + key in parent.path + and os.access(parent.path[key], os.R_OK) + and parent.path[key].is_dir() + and str(parent.path[key]) != "." + ): + eval("parent." + key + "_label.setPixmap(self.check_icon)") else: - if key != 'sim_main': # not invalid if sim folder missing, can create later + if ( + key != "sim_main" + ): # not invalid if sim folder missing, can create later self.invalid.append(key) - eval('parent.' + key + '_label.setPixmap(self.x_icon)') - eval('parent.' + key + '_label.show()') - + eval("parent." + key + "_label.setPixmap(self.x_icon)") + eval("parent." 
+ key + "_label.show()") + def set_load_full_set(self, event=None): parent = self.parent() parent.load_full_series = parent.load_full_series_box.isChecked() if event: parent.series.load_full_series() # parent.series_viewer._update(load_full_series = parent.load_full_series) - + class Shock_Settings(QtCore.QObject): def __init__(self, parent): @@ -293,129 +350,150 @@ def __init__(self, parent): self._set_shock_boxes() self.convert_units = self.parent().convert_units self.error_msg = [] - + def _set_shock_boxes(self): parent = self.parent() - - shock_var_list = ['T1', 'P1', 'u1', 'T2', 'P2', 'T5', 'P5'] + + shock_var_list = ["T1", "P1", "u1", "T2", "P2", "T5", "P5"] shock_box_list = [] for shock_var in shock_var_list: - value_box = eval('parent.' + shock_var + '_value_box') - unit_box = eval('parent.' + shock_var + '_units_box') + value_box = eval("parent." + shock_var + "_value_box") + unit_box = eval("parent." + shock_var + "_units_box") value_box.valueChanged.connect(self._shock_value_changed) - unit_box.currentIndexChanged[str].connect(lambda: self._shock_unit_changed()) + unit_box.currentIndexChanged[str].connect( + lambda: self._shock_unit_changed() + ) shock_box_list.append(unit_box) shock_box_list.append(value_box) - + # Reorder tab list for shock boxes - for i in range(len(shock_box_list)-1): # Sets the box order - parent.setTabOrder(shock_box_list[i], shock_box_list[i+1]) - + for i in range(len(shock_box_list) - 1): # Sets the box order + parent.setTabOrder(shock_box_list[i], shock_box_list[i + 1]) + def set_shock_value_box(self, var_type): parent = self.parent() - - unit = eval('str(parent.' + var_type + '_units_box.currentText())') + + unit = eval("str(parent." 
+ var_type + "_units_box.currentText())") value = parent.display_shock[var_type] - minimum_value = self.convert_units(0.05, unit, unit_dir='out') - display_value = self.convert_units(value, unit, unit_dir='out') + minimum_value = self.convert_units(0.05, unit, unit_dir="out") + display_value = self.convert_units(value, unit, unit_dir="out") if np.isnan(display_value): display_value = 0 - - eval('parent.' + var_type + '_value_box.blockSignals(True)') - eval('parent.' + var_type + '_value_box.setMinimum(' + str(minimum_value) + ')') - eval('parent.' + var_type + '_value_box.setValue(' + str(display_value) + ')') - eval('parent.' + var_type + '_value_box.blockSignals(False)') - + + eval("parent." + var_type + "_value_box.blockSignals(True)") + eval("parent." + var_type + "_value_box.setMinimum(" + str(minimum_value) + ")") + eval("parent." + var_type + "_value_box.setValue(" + str(display_value) + ")") + eval("parent." + var_type + "_value_box.blockSignals(False)") + def _shock_value_changed(self, event): parent = self.parent() - var_type = self.sender().objectName().split('_')[0] - + var_type = self.sender().objectName().split("_")[0] + # Get unit type and convert to SIM units - units = eval('str(parent.' + var_type + '_units_box.currentText())') - parent.display_shock[var_type] = self.convert_units(event, units, unit_dir = 'in') - + units = eval("str(parent." 
+ var_type + "_units_box.currentText())") + parent.display_shock[var_type] = self.convert_units(event, units, unit_dir="in") + self.solve_postshock(var_type) - + def _shock_unit_changed(self): - if self.sender() is None: return + if self.sender() is None: + return parent = self.parent() - var_type = self.sender().objectName().split('_')[0] - + var_type = self.sender().objectName().split("_")[0] + # Update spinbox self.set_shock_value_box(var_type) - parent.plot.signal.update_info_text(redraw=True) # update info text box - + parent.plot.signal.update_info_text(redraw=True) # update info text box + def solve_postshock(self, var_type): parent = self.parent() print2log = True - if parent.path_set.loading_dir_file and len(parent.series.current['species_alias']) > 0: + if ( + parent.path_set.loading_dir_file + and len(parent.series.current["species_alias"]) > 0 + ): print2log = False - if not hasattr(parent.mech.gas, 'species_names'): # Check mechanism is loaded + if not hasattr(parent.mech.gas, "species_names"): # Check mechanism is loaded return - + # Check that the variables exist to calculate post shock conditions - IC = [parent.display_shock[key] for key in ['T1', 'P1']] - if not np.isnan(IC).any() and len(parent.display_shock['thermo_mix']) > 0: # if T1, P1, thermo_mix all valid - IC = [parent.display_shock[key] for key in ['u1', 'T2', 'P2', 'T5', 'P5']] - nonzero_count = np.count_nonzero(~np.isnan(IC)) # count existing values of secondary IC's + IC = [parent.display_shock[key] for key in ["T1", "P1"]] + if ( + not np.isnan(IC).any() and len(parent.display_shock["thermo_mix"]) > 0 + ): # if T1, P1, thermo_mix all valid + IC = [parent.display_shock[key] for key in ["u1", "T2", "P2", "T5", "P5"]] + nonzero_count = np.count_nonzero( + ~np.isnan(IC) + ) # count existing values of secondary IC's if nonzero_count == 0: - self.error_msg.append('Not enough shock variables to calculate postshock conditions') + self.error_msg.append( + "Not enough shock variables to 
calculate postshock conditions" + ) else: - self.error_msg.append('Not enough shock variables to calculate postshock conditions') - - for species in parent.display_shock['thermo_mix']: + self.error_msg.append( + "Not enough shock variables to calculate postshock conditions" + ) + + for species in parent.display_shock["thermo_mix"]: if species not in parent.mech.gas.species_names: - self.error_msg.append('Species: {:s} is not in the mechanism'.format(species)) + self.error_msg.append( + "Species: {:s} is not in the mechanism".format(species) + ) if len(self.error_msg) > 0: - if print2log: # do not print to log if loading_dir_file + if print2log: # do not print to log if loading_dir_file for err in self.error_msg: parent.log.append(err) - - self.error_msg = [] # reset error message and return + + self.error_msg = [] # reset error message and return return - + # Setup variables to be sent to shock solver # Assume T1, mix + variables from selected zone are known variables - shock_vars = {'T1': parent.display_shock['T1'], 'mix': parent.display_shock['thermo_mix']} - if '1' in var_type: - shock_vars['P1'] = parent.display_shock['P1'] - shock_vars['u1'] = parent.display_shock['u1'] - elif '2' in var_type: - shock_vars['T2'] = parent.display_shock['T2'] - shock_vars['P2'] = parent.display_shock['P2'] - elif '5' in var_type: - shock_vars['T5'] = parent.display_shock['T5'] - shock_vars['P5'] = parent.display_shock['P5'] - + shock_vars = { + "T1": parent.display_shock["T1"], + "mix": parent.display_shock["thermo_mix"], + } + if "1" in var_type: + shock_vars["P1"] = parent.display_shock["P1"] + shock_vars["u1"] = parent.display_shock["u1"] + elif "2" in var_type: + shock_vars["T2"] = parent.display_shock["T2"] + shock_vars["P2"] = parent.display_shock["P2"] + elif "5" in var_type: + shock_vars["T5"] = parent.display_shock["T5"] + shock_vars["P5"] = parent.display_shock["P5"] + # Solve for new values shock = shock_fcns.Properties(parent.mech.gas, shock_vars, parent=parent) 
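`solve_postshock` decides whether enough secondary initial conditions exist by masking NaN entries (an empty input box) and counting what survives. The core test in isolation — the IC values below are illustrative, not taken from the diff:

```python
import numpy as np

# Secondary initial conditions in solve_postshock's order: u1, T2, P2, T5, P5.
# NaN marks a box the user has not filled in (values are illustrative).
IC = [np.nan, 1250.0, np.nan, 2100.0, np.nan]

# ~np.isnan(IC) is True where a value exists; count_nonzero counts the Trues
nonzero_count = np.count_nonzero(~np.isnan(IC))
print(nonzero_count)  # -> 2: only T2 and T5 were provided
```

A count of zero triggers the "Not enough shock variables" log message in the hunk above, since T1, P1, and the mixture alone underdetermine the post-shock state.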
self.success = shock.success - + if shock.success: parent.log._blink(False) else: return - + # Update new values and run sim # Remove set shock_vars - vars = list(set(shock_vars.keys())^set(['u1', 'T1', 'P1', 'T2', 'P2', 'T5', 'P5'])) + vars = list( + set(shock_vars.keys()) ^ set(["u1", "T1", "P1", "T2", "P2", "T5", "P5"]) + ) for var in vars: parent.display_shock[var] = shock.res[var] self.set_shock_value_box(var) - + # Set reactor conditions - parent.series.set('zone', parent.display_shock['zone']) - - parent.display_shock['u2'] = shock.res['u2'] - parent.display_shock['rho1'] = shock.res['rho1'] - - parent.tree.update_rates() # Updates the rate constants - parent.tree.update_uncertainties() # update rate constants uncertainty + parent.series.set("zone", parent.display_shock["zone"]) + + parent.display_shock["u2"] = shock.res["u2"] + parent.display_shock["rho1"] = shock.res["rho1"] + + parent.tree.update_rates() # Updates the rate constants + parent.tree.update_uncertainties() # update rate constants uncertainty parent.run_single() - + class Reactor_Settings(QtCore.QObject): def __init__(self, parent): @@ -423,16 +501,25 @@ def __init__(self, parent): self._set_reactor_boxes() self.update_reactor_choice(event=None) self.update_reactor_variables(event=None) - + def _set_reactor_boxes(self): parent = prnt = self.parent() - - boxes = [prnt.solve_energy_box, prnt.frozen_comp_box, prnt.end_time_units_box, - prnt.end_time_value_box, prnt.ODE_solver_box, prnt.sim_interp_factor_box, - prnt.ODE_rtol_box, prnt.ODE_atol_box] - + + boxes = [ + prnt.solve_energy_box, + prnt.frozen_comp_box, + prnt.end_time_units_box, + prnt.end_time_value_box, + prnt.ODE_solver_box, + prnt.sim_interp_factor_box, + prnt.ODE_rtol_box, + prnt.ODE_atol_box, + ] + for box in boxes: - if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance(box, QtWidgets.QSpinBox): + if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance( + box, QtWidgets.QSpinBox + ): 
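After a successful solve, the hunk above uses a set symmetric difference to pick out which state variables still need their display boxes updated: every member of the full {u1, T1, P1, T2, P2, T5, P5} set that was not an input. Note that `mix` also survives the operation, since it appears only on the input side. In isolation, with illustrative placeholder values:

```python
# Inputs the solver was given: T1 and mix always, plus the zone-1 pair
# (the numeric values are illustrative placeholders, not from the diff)
shock_vars = {"T1": 295.0, "mix": {"Ar": 1.0}, "P1": 0.3, "u1": 680.0}

all_state = {"u1", "T1", "P1", "T2", "P2", "T5", "P5"}

# Symmetric difference keeps keys present in exactly one of the two sets:
# the unsolved state variables, plus "mix" from the input side
to_update = set(shock_vars) ^ all_state
print(sorted(to_update))  # -> ['P2', 'P5', 'T2', 'T5', 'mix']
```

The same result could be written as `all_state - set(shock_vars) | {"mix"}`; the XOR form works because the inputs other than `mix` are always a subset of `all_state`.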
                 box.valueChanged.connect(self.update_reactor_variables)
             elif isinstance(box, QtWidgets.QComboBox):
                 box.currentIndexChanged[int].connect(self.update_reactor_variables)
@@ -440,132 +527,154 @@ def _set_reactor_boxes(self):
                 box.stateChanged.connect(self.update_reactor_variables)
             elif isinstance(box, QtWidgets.QTextEdit):
                 box.textChanged.connect(self.update_reactor_variables)
-
-        prnt.reactor_select_box.currentIndexChanged[str].connect(self.update_reactor_choice)
-
+
+        prnt.reactor_select_box.currentIndexChanged[str].connect(
+            self.update_reactor_choice
+        )
+
     def update_reactor_variables(self, event=None):
         parent = self.parent()
-
-        parent.var['reactor']['solve_energy'] = parent.solve_energy_box.isChecked()
-        parent.var['reactor']['frozen_comp'] = parent.frozen_comp_box.isChecked()
-
+
+        parent.var["reactor"]["solve_energy"] = parent.solve_energy_box.isChecked()
+        parent.var["reactor"]["frozen_comp"] = parent.frozen_comp_box.isChecked()
+
         # Set Simulation time
-        if 'μs' in parent.end_time_units_box.currentText():
-            t_unit_conv = parent.var['reactor']['t_unit_conv'] = 1E-6
-        elif 'ms' in parent.end_time_units_box.currentText():
-            t_unit_conv = parent.var['reactor']['t_unit_conv'] = 1E-3
-        elif 's' in parent.end_time_units_box.currentText():
-            t_unit_conv = parent.var['reactor']['t_unit_conv'] = 1
-
+        if "μs" in parent.end_time_units_box.currentText():
+            t_unit_conv = parent.var["reactor"]["t_unit_conv"] = 1e-6
+        elif "ms" in parent.end_time_units_box.currentText():
+            t_unit_conv = parent.var["reactor"]["t_unit_conv"] = 1e-3
+        elif "s" in parent.end_time_units_box.currentText():
+            t_unit_conv = parent.var["reactor"]["t_unit_conv"] = 1
+
         t_unit = parent.end_time_units_box.currentText()
-        parent.time_offset_box.setSuffix(' ' + t_unit)
-
-        parent.var['reactor']['ode_solver'] = parent.ODE_solver_box.currentText()
-        parent.var['reactor']['ode_rtol'] = 10**parent.ODE_rtol_box.value()
-        parent.var['reactor']['ode_atol'] = 10**parent.ODE_atol_box.value()
-        parent.var['reactor']['t_end'] = parent.end_time_value_box.value()*t_unit_conv
-        parent.var['reactor']['sim_interp_factor'] = parent.sim_interp_factor_box.value()
-
+        parent.time_offset_box.setSuffix(" " + t_unit)
+
+        parent.var["reactor"]["ode_solver"] = parent.ODE_solver_box.currentText()
+        parent.var["reactor"]["ode_rtol"] = 10 ** parent.ODE_rtol_box.value()
+        parent.var["reactor"]["ode_atol"] = 10 ** parent.ODE_atol_box.value()
+        parent.var["reactor"]["t_end"] = parent.end_time_value_box.value() * t_unit_conv
+        parent.var["reactor"][
+            "sim_interp_factor"
+        ] = parent.sim_interp_factor_box.value()
+
         if event is not None:
             sender = self.sender().objectName()
             parent.run_single()
-
+
         # if 'time_offset' in sender and hasattr(self, 'SIM'): # Don't rerun SIM if it exists
-            # if hasattr(self.SIM, 'independent_var') and hasattr(self.SIM, 'observable'):
-                # self.plot.signal.update_sim(self.SIM.independent_var, self.SIM.observable)
+        #     if hasattr(self.SIM, 'independent_var') and hasattr(self.SIM, 'observable'):
+        #         self.plot.signal.update_sim(self.SIM.independent_var, self.SIM.observable)
         # elif any(x in sender for x in ['end_time', 'sim_interp_factor', 'ODE_solver', 'rtol', 'atol']):
-            # self.run_single()
+        #     self.run_single()
         # elif self.display_shock['exp_data'].size > 0: # If exp_data exists
-            # self.plot.signal.update(update_lim=False)
-            # self.plot.signal.canvas.draw()
+        #     self.plot.signal.update(update_lim=False)
+        #     self.plot.signal.canvas.draw()
 
     def update_reactor_choice(self, event=None):
         parent = self.parent()
-
-        parent.var['reactor']['name'] = parent.reactor_select_box.currentText()
-        parent.plot.observable_widget.populate_mainComboBox() # update observables (delete density gradient from 0d)
-
+
+        parent.var["reactor"]["name"] = parent.reactor_select_box.currentText()
+        parent.plot.observable_widget.populate_mainComboBox()  # update observables (delete density gradient from 0d)
+
         # hide/show choices based on selection
-        if parent.var['reactor']['name'] == 'Incident Shock Reactor':
+        if parent.var["reactor"]["name"] == "Incident Shock Reactor":
             parent.zero_d_choice_frame.hide()
             parent.solver_frame.show()
-            parent.series.set('zone', 2)
-        elif '0d Reactor' in parent.var['reactor']['name']:
+            parent.series.set("zone", 2)
+        elif "0d Reactor" in parent.var["reactor"]["name"]:
             parent.zero_d_choice_frame.show()
             parent.solver_frame.hide()
-            parent.series.set('zone', 5)
-
+            parent.series.set("zone", 5)
+
         if event is not None:
             sender = self.sender().objectName()
             parent.run_single()
 
 
-class CheckableTabWidget(QTabWidget): # defunct TODO: this would be a good way to select the zone
+class CheckableTabWidget(
+    QTabWidget
+):  # defunct TODO: this would be a good way to select the zone
     checkBoxList = []
+
     def addTab(self, widget, title):
         QTabWidget.addTab(self, widget, title)
         checkBox = QCheckBox()
         self.checkBoxList.append(checkBox)
-        self.tabBar().setTabButton(self.tabBar().count()-1, QTabBar.LeftSide, checkBox)
-        self.connect(checkBox, QtCore.SIGNAL('stateChanged(int)'), lambda checkState: self.__emitStateChanged(checkBox, checkState))
+        self.tabBar().setTabButton(
+            self.tabBar().count() - 1, QTabBar.LeftSide, checkBox
+        )
+        self.connect(
+            checkBox,
+            QtCore.SIGNAL("stateChanged(int)"),
+            lambda checkState: self.__emitStateChanged(checkBox, checkState),
+        )
 
     def isChecked(self, index):
-        return self.tabBar().tabButton(index, QTabBar.LeftSide).checkState() != QtCore.Qt.Unchecked
+        return (
+            self.tabBar().tabButton(index, QTabBar.LeftSide).checkState()
+            != QtCore.Qt.Unchecked
+        )
 
     def setCheckState(self, index, checkState):
         self.tabBar().tabButton(index, QTabBar.LeftSide).setCheckState(checkState)
 
     def __emitStateChanged(self, checkBox, checkState):
         index = self.checkBoxList.index(checkBox)
-        self.emit(QtCore.SIGNAL('stateChanged(int, int)'), index, checkState)
+        self.emit(QtCore.SIGNAL("stateChanged(int, int)"), index, checkState)
+
 
-
 class Mix_Table(QtCore.QObject):
     def __init__(self, parent):
         super().__init__(parent)
         self.table = self.parent().mix_table
 
-        stylesheet = ["QHeaderView::section{", # stylesheet because windows 10 doesn't show borders on the bottom
+        stylesheet = [
+            "QHeaderView::section{",  # stylesheet because windows 10 doesn't show borders on the bottom
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
-            # "background-color:white;", # this matches windows 10 theme
-            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1," # this matches windows 7 theme perfectly
-            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
+            # "background-color:white;",  # this matches windows 10 theme
+            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1,"  # this matches windows 7 theme perfectly
+            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
             "padding:4px;",
-            "}",
-            "QTableCornerButton::section{",
+            "}",
+            "QTableCornerButton::section{",
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
             "background-color:white;",
-            "}"]
-
-        header = self.table.horizontalHeader()
-        header.setStyleSheet(' '.join(stylesheet))
+            "}",
+        ]
+
+        header = self.table.horizontalHeader()
+        header.setStyleSheet(" ".join(stylesheet))
         header.setSectionResizeMode(0, QtWidgets.QHeaderView.Interactive)
         header.setSectionResizeMode(1, QtWidgets.QHeaderView.Stretch)
         header.setSectionResizeMode(2, QtWidgets.QHeaderView.Fixed)
-        header.resizeSection(2, 60) # Force size of Mol Frac column
+        header.resizeSection(2, 60)  # Force size of Mol Frac column
         header.setFixedHeight(24)
-
+
         self.setItems(species=[], exp_mix=[], alias=[])
         self.table.itemChanged.connect(self.update_mix)
-
+
     def create_thermo_boxes(self, species=[]):
-        species.insert(0, '')
+        species.insert(0, "")
         self.thermoSpecies_box = []
         # create down_arrow_path with forward slashes as required by QT stylesheet url
-        down_arrow_path = '"' + str((self.parent().path['graphics']/'arrowdown.png').as_posix()) + '"'
+        down_arrow_path = (
+            '"'
+            + str((self.parent().path["graphics"] / "arrowdown.png").as_posix())
+            + '"'
+        )
         for row in range(self.table.rowCount()):
             self.thermoSpecies_box.append(misc_widget.SearchComboBox())
             self.thermoSpecies_box[-1].addItems(species)
             self.thermoSpecies_box[-1].currentIndexChanged[str].connect(self.update_mix)
             self.table.setCellWidget(row, 1, self.thermoSpecies_box[-1])
-            self.thermoSpecies_box[-1].setNewStyleSheet(down_arrow_path)
-
+            self.thermoSpecies_box[-1].setNewStyleSheet(down_arrow_path)
+
     def create_molFrac_boxes(self, allMolFrac=[]):
         self.molFrac_box = []
         for row in range(self.table.rowCount()):
@@ -573,20 +682,22 @@ def create_molFrac_boxes(self, allMolFrac=[]):
                 molFrac = 0
             else:
                 molFrac = allMolFrac[row]
-
-            self.molFrac_box.append(misc_widget.ScientificDoubleSpinBox(parent=self.parent(), value=molFrac))
+
+            self.molFrac_box.append(
+                misc_widget.ScientificDoubleSpinBox(parent=self.parent(), value=molFrac)
+            )
             self.molFrac_box[-1].setMinimum(0)
             self.molFrac_box[-1].setMaximum(1)
             self.molFrac_box[-1].setSingleIntStep(0.001)
-            self.molFrac_box[-1].setSpecialValueText('-')
+            self.molFrac_box[-1].setSpecialValueText("-")
             self.molFrac_box[-1].setFrame(False)
             self.molFrac_box[-1].valueChanged.connect(self.update_mix)
             self.table.setCellWidget(row, 2, self.molFrac_box[-1])
-
+
     def update_mix(self, event=None):
         def isPopStr(str):  # is populated string
             return not not str.strip()
-
+
         def isValidRow(table, row):
             if self.molFrac_box[row].value() == 0:
                 return False
@@ -596,19 +707,21 @@ def isValidRow(table, row):
                 return True
             else:
                 return False
-
+
         parent = self.parent()
         valid_row = []
-        for row in range(self.table.rowCount()):
+        for row in range(self.table.rowCount()):
             if isValidRow(self.table, row):
                 valid_row.append(row)
-
-        save_species_alias = False  # do not save aliases if no original alias and none added
-        if len(parent.series.current['species_alias']) > 0:
+
+        save_species_alias = (
+            False  # do not save aliases if no original alias and none added
+        )
+        if len(parent.series.current["species_alias"]) > 0:
             save_species_alias = True
 
         # parent.series.current['species_alias'] = {}  # set to empty dict and create from boxes
-        parent.display_shock['exp_mix'] = {}
+        parent.display_shock["exp_mix"] = {}
         for row in valid_row:
             molFrac = self.molFrac_box[row].value()
             thermo_name = str(self.thermoSpecies_box[row].currentText())
@@ -617,22 +730,27 @@ def isValidRow(table, row):
             else:
                 exp_name = self.table.item(row, 0).text()
 
-            if thermo_name: # If experimental and thermo name exist update aliases
+            if thermo_name:  # If experimental and thermo name exist update aliases
                 if self.table.item(row, 0) is not None and isPopStr(exp_name):
-                    parent.series.current['species_alias'][exp_name] = thermo_name
-                elif exp_name in parent.series.current['species_alias']:
-                    del parent.series.current['species_alias'][exp_name]
-
-            parent.display_shock['exp_mix'][exp_name] = molFrac
-
+                    parent.series.current["species_alias"][exp_name] = thermo_name
+                elif exp_name in parent.series.current["species_alias"]:
+                    del parent.series.current["species_alias"][exp_name]
+
+            parent.display_shock["exp_mix"][exp_name] = molFrac
+
         # if path_file exists and species_aliases exist and not loading preset, save aliases
-        if save_species_alias or len(parent.series.current['species_alias']) > 0:
-            if parent.path['path_file'].is_file() and not parent.path_set.loading_dir_file:
-                parent.path_set.save_aliases(parent.path['path_file'])
-
+        if save_species_alias or len(parent.series.current["species_alias"]) > 0:
+            if (
+                parent.path["path_file"].is_file()
+                and not parent.path_set.loading_dir_file
+            ):
+                parent.path_set.save_aliases(parent.path["path_file"])
+
         parent.series.thermo_mix()
-        parent.shock_widgets.solve_postshock('T1')  # Updates Post-Shock conditions and SIM
-
+        parent.shock_widgets.solve_postshock(
+            "T1"
+        )  # Updates Post-Shock conditions and SIM
+
     def setItems(self, species=[], exp_mix=[], alias=[]):
         self.table.blockSignals(True)
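The `update_mix` hunks above walk the mixture table, keep only rows with a species name and a nonzero mole fraction, and maintain the experimental-to-mechanism alias map. A minimal sketch of that filtering, with hypothetical `(exp_name, thermo_name, mol_frac)` tuples standing in for the Qt table rows and spin boxes:

```python
def collect_mix(rows):
    """Filter mixture rows like update_mix's isValidRow pass.

    rows: iterable of (exp_name, thermo_name, mol_frac) tuples; these
    names are illustrative stand-ins for the table items and boxes.
    Returns (exp_mix, aliases): the mixture dict keyed by experimental
    name (falling back to the mechanism name) and the alias map.
    """
    exp_mix, aliases = {}, {}
    for exp_name, thermo_name, mol_frac in rows:
        # Skip rows with a zero mole fraction or no name at all,
        # mirroring the isValidRow checks above.
        if mol_frac == 0 or not (exp_name.strip() or thermo_name.strip()):
            continue
        name = exp_name if exp_name.strip() else thermo_name
        if thermo_name and exp_name.strip():
            aliases[exp_name] = thermo_name  # experimental -> mechanism name
        exp_mix[name] = mol_frac
    return exp_mix, aliases


mix, alias = collect_mix(
    [("fuel", "CH4", 0.05), ("", "O2", 0.10), ("Ar", "", 0.85), ("x", "y", 0.0)]
)
print(mix)    # {'fuel': 0.05, 'O2': 0.1, 'Ar': 0.85}
print(alias)  # {'fuel': 'CH4'}
```

Keeping the alias map separate from the mixture dict is what lets a renamed experimental species still resolve to its mechanism species when the simulation is rebuilt.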
         self.table.clearContents()
@@ -641,7 +759,7 @@ def setItems(self, species=[], exp_mix=[], alias=[]):
             self.create_molFrac_boxes([])
         else:
             self.create_molFrac_boxes([*exp_mix.values()])
-
+
         for n, (name, molFrac) in enumerate(exp_mix.items()):
             self.table.setItem(n, 0, QTableWidgetItem(name))
             if name in alias:
@@ -649,19 +767,22 @@ def setItems(self, species=[], exp_mix=[], alias=[]):
                 box.blockSignals(True)
                 box.setCurrentIndex(box.findText(alias[name]))
                 box.blockSignals(False)
-
+
         # self.table.resizeColumnsToContents()
         self.table.blockSignals(False)
-        if len(species) > 0 and species != ['']:
+        if len(species) > 0 and species != [""]:
             self.update_mix()
-
-    def update_species(self): # may be better to pass variables than call from parent?
+
+    def update_species(self):  # may be better to pass variables than call from parent?
         parent = self.parent()
-        exp_mix = parent.display_shock['exp_mix']
-        species_alias = parent.series.current['species_alias']
-        if hasattr(parent.mech.gas, 'species_names'): # if mech exists, set mix table with mech species in thermo box
-            self.setItems(parent.mech.gas.species_names,
-                          exp_mix = exp_mix, alias=species_alias)
+        exp_mix = parent.display_shock["exp_mix"]
+        species_alias = parent.series.current["species_alias"]
+        if hasattr(
+            parent.mech.gas, "species_names"
+        ):  # if mech exists, set mix table with mech species in thermo box
+            self.setItems(
+                parent.mech.gas.species_names, exp_mix=exp_mix, alias=species_alias
+            )
         else:
             self.setItems([], exp_mix=exp_mix, alias=species_alias)
 
@@ -671,102 +792,157 @@ def __init__(self, parent):
         super().__init__(parent)
         self.table = self.parent().weight_fcn_table
 
-        stylesheet = ["QHeaderView::section{", # stylesheet because windows 10 doesn't show borders on the bottom
+        stylesheet = [
+            "QHeaderView::section{",  # stylesheet because windows 10 doesn't show borders on the bottom
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
-            # "background-color:white;", # this matches windows 10 theme
-            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1," # this matches windows 7 theme perfectly
-            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
+            # "background-color:white;",  # this matches windows 10 theme
+            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1,"  # this matches windows 7 theme perfectly
+            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
             "padding:4px;}",
-            "QTableCornerButton::section{",
+            "QTableCornerButton::section{",
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
-            "background-color:white;}"]
-
-        header = self.table.horizontalHeader()
-        header.setStyleSheet(' '.join(stylesheet))
+            "background-color:white;}",
+        ]
+
+        header = self.table.horizontalHeader()
+        header.setStyleSheet(" ".join(stylesheet))
         header.setSectionResizeMode(0, QtWidgets.QHeaderView.Stretch)
         header.setSectionResizeMode(1, QtWidgets.QHeaderView.Stretch)
         header.setFixedHeight(24)
-
+
         self.table.setSpan(0, 0, 1, 2)  # make first row span entire length
-
+
         self.create_boxes()
         self.table.itemChanged.connect(self.update)
-
+
     def create_boxes(self):
         parent = self.parent()
         # self.table.setStyleSheet("QTableWidget::item { margin-left: 10px }")
         # TODO: Change to saved variables
-        self.boxes = {'weight_max': [], 'weight_min': [], 'weight_shift': [], 'weight_k': []}
-        self.prop = {'start': {'weight_max': {'value': 100, 'singleStep': 1, 'maximum': 100,
-                                              'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'weight_min': {'value': 0, 'singleStep': 1, 'maximum': 100,
-                                              'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'weight_shift': {'value': 4.5, 'singleStep': 0.1, 'maximum': 100,
-                                                'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'weight_k': {'value': 0, 'singleStep': 0.01, 'decimals': 3,
-                                            'minimum': 0}},
-                     'end': {'weight_min': {'value': 0, 'singleStep': 1, 'maximum': 100,
-                                            'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                             'weight_shift': {'value': 36.0, 'singleStep': 0.1, 'maximum': 100,
-                                              'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                             'weight_k': {'value': 0.3, 'singleStep': 0.01, 'decimals': 3,
-                                          'minimum': 0}}}
-
-        for j, col in enumerate(['start', 'end']):
+        self.boxes = {
+            "weight_max": [],
+            "weight_min": [],
+            "weight_shift": [],
+            "weight_k": [],
+        }
+        self.prop = {
+            "start": {
+                "weight_max": {
+                    "value": 100,
+                    "singleStep": 1,
+                    "maximum": 100,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "weight_min": {
+                    "value": 0,
+                    "singleStep": 1,
+                    "maximum": 100,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "weight_shift": {
+                    "value": 4.5,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "weight_k": {
+                    "value": 0,
+                    "singleStep": 0.01,
+                    "decimals": 3,
+                    "minimum": 0,
+                },
+            },
+            "end": {
+                "weight_min": {
+                    "value": 0,
+                    "singleStep": 1,
+                    "maximum": 100,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "weight_shift": {
+                    "value": 36.0,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "weight_k": {
+                    "value": 0.3,
+                    "singleStep": 0.01,
+                    "decimals": 3,
+                    "minimum": 0,
+                },
+            },
+        }
+
+        for j, col in enumerate(["start", "end"]):
             for i, row in enumerate(self.prop[col]):
-                box_val = self.prop[col][row]['value']
-                box = misc_widget.ScientificDoubleSpinBox(parent=self.parent(), value=box_val)
-
-                box.setSingleIntStep(self.prop[col][row]['singleStep'])
-                box.setStrDecimals(self.prop[col][row]['decimals'])
-                box.setMinimum(self.prop[col][row]['minimum'])
-                if 'suffix' in self.prop[col][row]:
-                    box.setSuffix(self.prop[col][row]['suffix'])
-                if 'maximum' in self.prop[col][row]:
-                    box.setMaximum(self.prop[col][row]['maximum'])
+                box_val = self.prop[col][row]["value"]
+                box = misc_widget.ScientificDoubleSpinBox(
+                    parent=self.parent(), value=box_val
+                )
+
+                box.setSingleIntStep(self.prop[col][row]["singleStep"])
+                box.setStrDecimals(self.prop[col][row]["decimals"])
+                box.setMinimum(self.prop[col][row]["minimum"])
+                if "suffix" in self.prop[col][row]:
+                    box.setSuffix(self.prop[col][row]["suffix"])
+                if "maximum" in self.prop[col][row]:
+                    box.setMaximum(self.prop[col][row]["maximum"])
                 box.setFrame(False)
                 box.info = [col, row]
-
+
                 box.valueChanged.connect(self.update)
-                self.table.setCellWidget(i+j, j, box)
+                self.table.setCellWidget(i + j, j, box)
                 self.boxes[row].append(box)
-
+
     def set_boxes(self, shock=None):
         parent = self.parent()
         if shock is None:
             shock = parent.display_shock
-
-        for j, col in enumerate(['start', 'end']):
+
+        for j, col in enumerate(["start", "end"]):
             for i, row in enumerate(self.prop[col]):
                 box_val = shock[row][j]
                 box = self.boxes[row][j]
                 box.blockSignals(True)
                 box.setValue(box_val)
                 box.blockSignals(False)
-
+
     def update(self, event=None, shock=None):
         parent = self.parent()
         update_plot = False
-        if shock is None: # if no shock given, must be from widgets
+        if shock is None:  # if no shock given, must be from widgets
             shock = parent.display_shock
             update_plot = True
-
-        shock['weight_max'] = [self.boxes['weight_max'][0].value()]
-        shock['weight_min'] = [box.value() for box in self.boxes['weight_min']]
-        shock['weight_shift'] = [box.value() for box in self.boxes['weight_shift']]
-        shock['weight_k'] = [box.value() for box in self.boxes['weight_k']]
-        if parent.display_shock['exp_data'].size > 0 and update_plot: # If exp_data exists
+
+        shock["weight_max"] = [self.boxes["weight_max"][0].value()]
+        shock["weight_min"] = [box.value() for box in self.boxes["weight_min"]]
+        shock["weight_shift"] = [box.value() for box in self.boxes["weight_shift"]]
+        shock["weight_k"] = [box.value() for box in self.boxes["weight_k"]]
+
+        if (
+            parent.display_shock["exp_data"].size > 0 and update_plot
+        ):  # If exp_data exists
             parent.plot.signal.update(update_lim=False)
             parent.plot.signal.canvas.draw()
-
-
+
+
 class Uncertainty_Parameters_Table(QtCore.QObject):
     def __init__(self, parent):
         super().__init__(parent)
@@ -774,496 +950,701 @@ def __init__(self, parent):
         self.unc_type = parent.unc_type_box.currentText()
 
-        stylesheet = ["QHeaderView::section{", # stylesheet because windows 10 doesn't show borders on the bottom
+        stylesheet = [
+            "QHeaderView::section{",  # stylesheet because windows 10 doesn't show borders on the bottom
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
-            # "background-color:white;", # this matches windows 10 theme
-            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1," # this matches windows 7 theme perfectly
-            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
+            # "background-color:white;",  # this matches windows 10 theme
+            "background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1,"  # this matches windows 7 theme perfectly
+            "stop: 0 #ffffff, stop: 1.0 #f1f2f4);"
             "padding:4px;}",
-            "QTableCornerButton::section{",
+            "QTableCornerButton::section{",
             "border-top:0px solid #D8D8D8;",
             "border-left:0px solid #D8D8D8;",
             "border-right:1px solid #D8D8D8;",
             "border-bottom:1px solid #D8D8D8;",
-            "background-color:white;}"]
-
-        header = self.table.horizontalHeader()
-        header.setStyleSheet(' '.join(stylesheet))
+            "background-color:white;}",
+        ]
+
+        header = self.table.horizontalHeader()
+        header.setStyleSheet(" ".join(stylesheet))
         header.setSectionResizeMode(0, QtWidgets.QHeaderView.Stretch)
         header.setSectionResizeMode(1, QtWidgets.QHeaderView.Stretch)
         header.setFixedHeight(24)
-
+
         self.table.setSpan(1, 0, 1, 2)  # make first row span entire length
-
+
         self.create_boxes()
         self.table.itemChanged.connect(self.update)
         parent.unc_type_box.currentTextChanged.connect(self.update)
         parent.unc_shading_box.currentTextChanged.connect(self.update)
         parent.wavelet_levels_box.valueChanged.connect(self.update)
-
+
     def create_boxes(self):
         parent = self.parent()
         # self.table.setStyleSheet("QTableWidget::item { margin-left: 10px }")
         # TODO: Change to saved variables
-        self.boxes = {'unc_max': [], 'unc_min': [], 'unc_shift': [], 'unc_k': [], 'unc_cutoff': []}
-        self.prop = {'start': {'unc_max': {'value': 0, 'singleStep': 1, 'row': 0,
-                                           'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'unc_min': {'value': 0, 'singleStep': 1, 'row': 1,
-                                           'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'unc_shift': {'value': 4.5, 'singleStep': 0.1, 'maximum': 100, 'row': 2,
-                                             'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                               'unc_k': {'value': 0, 'singleStep': 0.01, 'decimals': 3, 'row': 3,
-                                         'minimum': 0},
-                               'unc_cutoff': {'value': 4.5, 'singleStep': 0.1, 'maximum': 100, 'row': 4,
-                                              'minimum': 0, 'decimals': 3, 'suffix': '%'}},
-                     'end': {'unc_max': {'value': 0, 'singleStep': 1, 'row': 0,
-                                         'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                             'unc_shift': {'value': 36.0, 'singleStep': 0.1, 'maximum': 100, 'row': 2,
-                                           'minimum': 0, 'decimals': 3, 'suffix': '%'},
-                             'unc_k': {'value': 0.3, 'singleStep': 0.01, 'decimals': 3, 'row': 3,
-                                       'minimum': 0},
-                             'unc_cutoff': {'value': 4.5, 'singleStep': 0.1, 'maximum': 100, 'row': 4,
-                                            'minimum': 0, 'decimals': 3, 'suffix': '%'}}}
-
-        for j, col in enumerate(['start', 'end']):
+        self.boxes = {
+            "unc_max": [],
+            "unc_min": [],
+            "unc_shift": [],
+            "unc_k": [],
+            "unc_cutoff": [],
+        }
+        self.prop = {
+            "start": {
+                "unc_max": {
+                    "value": 0,
+                    "singleStep": 1,
+                    "row": 0,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "unc_min": {
+                    "value": 0,
+                    "singleStep": 1,
+                    "row": 1,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "unc_shift": {
+                    "value": 4.5,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "row": 2,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "unc_k": {
+                    "value": 0,
+                    "singleStep": 0.01,
+                    "decimals": 3,
+                    "row": 3,
+                    "minimum": 0,
+                },
+                "unc_cutoff": {
+                    "value": 4.5,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "row": 4,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+            },
+            "end": {
+                "unc_max": {
+                    "value": 0,
+                    "singleStep": 1,
+                    "row": 0,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "unc_shift": {
+                    "value": 36.0,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "row": 2,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+                "unc_k": {
+                    "value": 0.3,
+                    "singleStep": 0.01,
+                    "decimals": 3,
+                    "row": 3,
+                    "minimum": 0,
+                },
+                "unc_cutoff": {
+                    "value": 4.5,
+                    "singleStep": 0.1,
+                    "maximum": 100,
+                    "row": 4,
+                    "minimum": 0,
+                    "decimals": 3,
+                    "suffix": "%",
+                },
+            },
+        }
+
+        for j, col in enumerate(["start", "end"]):
             for row in self.prop[col]:
-                i = self.prop[col][row]['row']
-
-                box_val = self.prop[col][row]['value']
-                box = misc_widget.ScientificDoubleSpinBox(parent=self.parent(), value=box_val)
-
-                box.setSingleIntStep(self.prop[col][row]['singleStep'])
-                box.setStrDecimals(self.prop[col][row]['decimals'])
-                box.setMinimum(self.prop[col][row]['minimum'])
-                if 'suffix' in self.prop[col][row]:
-                    box.setSuffix(self.prop[col][row]['suffix'])
-                if 'maximum' in self.prop[col][row]:
-                    box.setMaximum(self.prop[col][row]['maximum'])
+                i = self.prop[col][row]["row"]
+
+                box_val = self.prop[col][row]["value"]
+                box = misc_widget.ScientificDoubleSpinBox(
+                    parent=self.parent(), value=box_val
+                )
+
+                box.setSingleIntStep(self.prop[col][row]["singleStep"])
+                box.setStrDecimals(self.prop[col][row]["decimals"])
+                box.setMinimum(self.prop[col][row]["minimum"])
+                if "suffix" in self.prop[col][row]:
+                    box.setSuffix(self.prop[col][row]["suffix"])
+                if "maximum" in self.prop[col][row]:
+                    box.setMaximum(self.prop[col][row]["maximum"])
                 box.setFrame(False)
                 box.info = [col, row]
-
+
                 box.valueChanged.connect(self.update)
                 self.table.setCellWidget(i, j, box)
                 self.boxes[row].append(box)
-
+
     def set_boxes(self, shock=None):
         parent = self.parent()
         if shock is None:
             shock = parent.display_shock
-
-        for j, col in enumerate(['start', 'end']):
+
+        for j, col in enumerate(["start", "end"]):
             for i, row in enumerate(self.prop[col]):
                 box_val = shock[row][j]
                 box = self.boxes[row][j]
                 box.blockSignals(True)
                 box.setValue(box_val)
                 box.blockSignals(False)
-
+
     def update(self, event=None, shock=None):
         parent = self.parent()
         sender = self.sender()
         update_plot = False
-        if shock is None: # if no shock given, must be from widgets
+        if shock is None:  # if no shock given, must be from widgets
             shock = parent.display_shock
             update_plot = True
-
+
         if sender is parent.unc_type_box:
             self.switch_unc_type()
 
         if sender in [parent.unc_shading_box, parent.wavelet_levels_box]:
             parent.plot.signal.unc_shading = parent.unc_shading_box.currentText()
             parent.plot.signal.wavelet_levels = parent.wavelet_levels_box.value()
-            if parent.plot.signal.unc_shading != 'Smoothed Signal':
+            if parent.plot.signal.unc_shading != "Smoothed Signal":
                 parent.wavelet_levels_box.setEnabled(False)
             else:
                 parent.wavelet_levels_box.setEnabled(True)
 
             parent.plot.signal.update_uncertainty_shading()
 
-        if sender in self.boxes['unc_cutoff']:
-            self.boxes['unc_cutoff'][0].setMaximum(self.boxes['unc_cutoff'][1].value())
-            self.boxes['unc_cutoff'][1].setMinimum(self.boxes['unc_cutoff'][0].value())
+        if sender in self.boxes["unc_cutoff"]:
+            self.boxes["unc_cutoff"][0].setMaximum(self.boxes["unc_cutoff"][1].value())
+            self.boxes["unc_cutoff"][1].setMinimum(self.boxes["unc_cutoff"][0].value())
 
         for param in list(self.boxes.keys()):
             shock[param] = [box.value() for box in self.boxes[param]]
 
-        if parent.display_shock['exp_data'].size > 0 and update_plot: # If exp_data exists
+        if (
+            parent.display_shock["exp_data"].size > 0 and update_plot
+        ):  # If exp_data exists
             parent.plot.signal.update(update_lim=False)
             parent.plot.signal.canvas.draw()
 
     def switch_unc_type(self):
         parent = self.parent()
         shock = parent.display_shock
-
+
         # for loading, if a sim hasn't been run
-        if not hasattr(parent.SIM, 'independent_var') and parent.unc_type_box.currentText() != '%':
-            self.unc_type = parent.unc_type_box.currentText()
-            for box in [*self.boxes['unc_max'], *self.boxes['unc_min']]:
-                box.setSuffix('')
-            return
+        if (
+            not hasattr(parent.SIM, "independent_var")
+            and parent.unc_type_box.currentText() != "%"
+        ):
+            self.unc_type = parent.unc_type_box.currentText()
+            for box in [*self.boxes["unc_max"], *self.boxes["unc_min"]]:
+                box.setSuffix("")
+            return
 
         t = parent.SIM.independent_var
         sim_obs = parent.SIM.observable
         old_unc = parent.series.uncertainties(t)
-        t_conv = parent.var['reactor']['t_unit_conv']
-        t0 = shock['exp_data'][ 0, 0]
-        tf = shock['exp_data'][-1, 0]
+        t_conv = parent.var["reactor"]["t_unit_conv"]
+        t0 = shock["exp_data"][0, 0]
+        tf = shock["exp_data"][-1, 0]
 
-        shift = np.array(shock['unc_shift'])/100*(tf-t0) + t0
-        k = np.array(shock['unc_k'])*t_conv
-        unc_min = np.array(shock['unc_min'])
-        unc_max = np.array(shock['unc_max'])
+        shift = np.array(shock["unc_shift"]) / 100 * (tf - t0) + t0
+        k = np.array(shock["unc_k"]) * t_conv
+        unc_min = np.array(shock["unc_min"])
+        unc_max = np.array(shock["unc_max"])
         A = np.insert(unc_max, 1, unc_min)
 
         self.unc_type = parent.unc_type_box.currentText()
-
+
         # TODO: Switching could use some more work but good enough for now
-        if self.unc_type == '%':
+        if self.unc_type == "%":
             abs_unc = old_unc
 
-            x0 = [A[0]/sim_obs[0], A[1]/np.median(sim_obs), A[2]/sim_obs[-1], *k, *shift]
-            bnds = np.ones((7, 2))*[0, np.inf]
+            x0 = [
+                A[0] / sim_obs[0],
+                A[1] / np.median(sim_obs),
+                A[2] / sim_obs[-1],
+                *k,
+                *shift,
+            ]
+            bnds = np.ones((7, 2)) * [0, np.inf]
             bnds[3:5, :] = [t0, tf]
 
-            zero = lambda x: np.sum((double_sigmoid(t, x[0:3], x[3:5], x[5:7])*sim_obs - abs_unc)**2)
+            zero = lambda x: np.sum(
+                (double_sigmoid(t, x[0:3], x[3:5], x[5:7]) * sim_obs - abs_unc) ** 2
+            )
             res = minimize(zero, x0, bounds=bnds)
 
-            new_vals = {'unc_min': [res.x[1]*100], 'unc_max': [res.x[0]*100, res.x[2]*100],
-                        'unc_k': res.x[3:5]/t_conv, 'unc_shift': (res.x[5:7] - t0)*100/(tf-t0)}
+            new_vals = {
+                "unc_min": [res.x[1] * 100],
+                "unc_max": [res.x[0] * 100, res.x[2] * 100],
+                "unc_k": res.x[3:5] / t_conv,
+                "unc_shift": (res.x[5:7] - t0) * 100 / (tf - t0),
+            }
         else:
-            abs_unc = sim_obs*old_unc
+            abs_unc = sim_obs * old_unc
 
             # calculate new absolute uncertainty extents
-            x0 = [abs_unc[0], A[1]/100*np.median(sim_obs), abs_unc[-1], *k, *shift]
-            bnds = np.ones((7, 2))*[0, np.inf]
+            x0 = [abs_unc[0], A[1] / 100 * np.median(sim_obs), abs_unc[-1], *k, *shift]
+            bnds = np.ones((7, 2)) * [0, np.inf]
             bnds[3:5, :] = [t0, tf]
 
-            zero = lambda x: np.sum((double_sigmoid(t, x[0:3], x[3:5], x[5:7]) - abs_unc)**2)
+            zero = lambda x: np.sum(
+                (double_sigmoid(t, x[0:3], x[3:5], x[5:7]) - abs_unc) ** 2
+            )
             res = minimize(zero, x0, bounds=bnds)
 
-            new_vals = {'unc_min': [res.x[1]], 'unc_max': [res.x[0], res.x[2]],
-                        'unc_k': res.x[3:5]/t_conv, 'unc_shift': (res.x[5:7] - t0)*100/(tf-t0)}
+            new_vals = {
+                "unc_min": [res.x[1]],
+                "unc_max": [res.x[0], res.x[2]],
+                "unc_k": res.x[3:5] / t_conv,
+                "unc_shift": (res.x[5:7] - t0) * 100 / (tf - t0),
+            }
 
-        newSingleIntStep = 10**(OoM(np.min(res.x[0:3])))
+        newSingleIntStep = 10 ** (OoM(np.min(res.x[0:3])))
 
-        for j, col in enumerate(['start', 'end']):
+        for j, col in enumerate(["start", "end"]):
             for row in new_vals.keys():
-                if len(self.boxes[row]) <= j: continue
+                if len(self.boxes[row]) <= j:
+                    continue
                 box = self.boxes[row][j]
 
-                if self.unc_type == '%':
-                    box.setSingleIntStep(self.prop[col][row]['singleStep'])
-                    if 'suffix' in self.prop[col][row]:
-                        box.setSuffix(self.prop[col][row]['suffix'])
+                if self.unc_type == "%":
+                    box.setSingleIntStep(self.prop[col][row]["singleStep"])
+                    if "suffix" in self.prop[col][row]:
+                        box.setSuffix(self.prop[col][row]["suffix"])
                 else:
-                    if row in ['unc_min', 'unc_max']:
+                    if row in ["unc_min", "unc_max"]:
                         box.setSingleIntStep(newSingleIntStep)
-                    if 'suffix' in self.prop[col][row]:
-                        box.setSuffix('')
-
+                    if "suffix" in self.prop[col][row]:
+                        box.setSuffix("")
+
                 box.blockSignals(True)
                 box.setValue(new_vals[row][j])
                 box.blockSignals(False)
-
+
                 box.valueChanged.emit(box.value())
-
+
 
 class Tables_Tab(QtCore.QObject):
     def __init__(self, parent):
         super().__init__(parent)
         parent = self.parent()
         self.tabwidget = parent.tab_stacked_widget
-
+
         # Initialize and Connect Tree Widgets
-        parent.tree = mech_widget.Tree(parent) # TODO: make dict of trees and types
-        parent.tree_thermo = thermo_widget.Tree(parent) # TODO: MAKE THIS
+        parent.tree = mech_widget.Tree(parent)  # TODO: make dict of trees and types
+        parent.tree_thermo = thermo_widget.Tree(parent)  # TODO: MAKE THIS
         parent.series_viewer = series_viewer_widget.Series_Viewer(parent)
-
+
         selector = parent.tab_select_comboBox
-        selector.currentIndexChanged[str].connect(self.select)
-
+        selector.currentIndexChanged[str].connect(self.select)
+
     def select(self, event):
         parent = self.parent()
-        if 'Mechanism' in event:
-            self.tabwidget.setCurrentWidget(self.tabwidget.findChild(QWidget, 'mech_tab'))
-            if 'Bilbo' in event:
-                parent.tree.mech_tree_type = 'Bilbo'
-            elif 'Chemkin' in event:
-                parent.tree.mech_tree_type = 'Chemkin'
-
-            if parent.mech_loaded: # if mech is loaded successfully, update display type
+        if "Mechanism" in event:
+            self.tabwidget.setCurrentWidget(
+                self.tabwidget.findChild(QWidget, "mech_tab")
+            )
+            if "Bilbo" in event:
+                parent.tree.mech_tree_type = "Bilbo"
+            elif "Chemkin" in event:
+                parent.tree.mech_tree_type = "Chemkin"
+
+            if (
+                parent.mech_loaded
+            ):  # if mech is loaded successfully, update display type
                 parent.tree.update_display_type()
-        elif 'Thermodynamics' in event:
-            self.tabwidget.setCurrentWidget(self.tabwidget.findChild(QWidget, 'thermo_tab'))
-        elif 'Series Viewer' in event:
-            self.tabwidget.setCurrentWidget(self.tabwidget.findChild(QWidget, 'series_viewer_tab'))
-
-
+        elif "Thermodynamics" in event:
+            self.tabwidget.setCurrentWidget(
+                self.tabwidget.findChild(QWidget, "thermo_tab")
+            )
+        elif "Series Viewer" in event:
+            self.tabwidget.setCurrentWidget(
+                self.tabwidget.findChild(QWidget, "series_viewer_tab")
+            )
+
+
 class Log:
     def __init__(self, tab_widget, log_box, clear_log_button, copy_log_button):
         self.tab_widget = tab_widget
         self.log = log_box
-        self.log_tab = self.tab_widget.findChild(QWidget, 'log_tab')
+        self.log_tab = self.tab_widget.findChild(QWidget, "log_tab")
         self.log_tab_idx = self.tab_widget.indexOf(self.log_tab)
-        self.color = {'base': self.tab_widget.tabBar().tabTextColor(self.log_tab_idx),
-                      'gold': QtGui.QColor(255, 191, 0)}
-        self.current_color = self.color['base']
+        self.color = {
+            "base": self.tab_widget.tabBar().tabTextColor(self.log_tab_idx),
+            "gold": QtGui.QColor(255, 191, 0),
+        }
+        self.current_color = self.color["base"]
         self.blink_status = False
-        self.log.setTabStopWidth(int(QtGui.QFontMetricsF(self.log.font()).width(' ')) * 6)
-        #font = QtGui.QFont("Courier New")
-        #font.setStyleHint(QtGui.QFont.TypeWriter)
-        #self.log.setCurrentFont(font)
-        #self.log.setFontPointSize(9)
+        self.log.setTabStopWidth(
+            int(QtGui.QFontMetricsF(self.log.font()).width(" ")) * 6
+        )
+        # font = QtGui.QFont("Courier New")
+        # font.setStyleHint(QtGui.QFont.TypeWriter)
+        # self.log.setCurrentFont(font)
+        # self.log.setFontPointSize(9)
         # self.tab_widget.tabBar().setStyleSheet('background-color: yellow')
 
         # Connect Log Functions
         self.tab_widget.currentChanged.connect(self._tab_widget_change)
         clear_log_button.clicked.connect(self.clear)
         copy_log_button.clicked.connect(self.copy)
-
+
     def append(self, message, alert=True):
         if isinstance(message, list):
-            message = '\n'.join(message)
-
-        self.log.append('{}'.format(message))
+            message = "\n".join(message)
+
+        self.log.append("{}".format(message))
         if alert and self.tab_widget.currentIndex() != self.log_tab_idx:
             self._blink(True)
-
+
     def _tab_widget_change(self, event):
         if event == self.log_tab_idx:
             self._blink(False)
-
+
     def _blink(self, blink_on):
         if blink_on:
-            if not self.blink_status: # if not blinking, set timer and start
+            if not self.blink_status:  # if not blinking, set timer and start
                 self.timer = QtCore.QTimer()
                 self.timer.timeout.connect(lambda: self._blink(True))
                 self.timer.start(500)
-
+
             self.blink_status = True
-            if self.current_color is self.color['base']:
-                self.tab_widget.tabBar().setTabTextColor(self.log_tab_idx, self.color['gold'])
-                self.current_color = self.color['gold']
-            elif self.current_color is self.color['gold']:
-                self.tab_widget.tabBar().setTabTextColor(self.log_tab_idx, self.color['base'])
-                self.current_color = self.color['base']
+            if self.current_color is self.color["base"]:
+                self.tab_widget.tabBar().setTabTextColor(
+                    self.log_tab_idx, self.color["gold"]
+                )
+                self.current_color = self.color["gold"]
+            elif self.current_color is self.color["gold"]:
+                self.tab_widget.tabBar().setTabTextColor(
+                    self.log_tab_idx, self.color["base"]
+                )
+                self.current_color = self.color["base"]
         elif not blink_on or self.blink_status:
             self.blink_status = False
-            if hasattr(self, 'timer'):
+            if hasattr(self, "timer"):
                 self.timer.stop()
-            self.tab_widget.tabBar().setTabTextColor(self.log_tab_idx, self.color['base'])
-            self.current_color = self.color['base']
-
+            self.tab_widget.tabBar().setTabTextColor(
+                self.log_tab_idx, self.color["base"]
+            )
+            self.current_color = self.color["base"]
+
     def clear(self, event=None):
         if event is not None:
             self._blink(False)
         self.log.clear()
-
+
     def copy(self, event):
         def fn(self):
             self.log.selectAll()
             self.log.copy()
-
+
         self.QTextEdit_function(self.log, fn, self)
-
+
     def QTextEdit_function(self, object, fn, *args, **kwargs):
-        signal = object.blockSignals(True) # stop changing text from signaling
-        old_position = object.textCursor().position() # find old cursor position
-        cursor = object.textCursor() # create new cursor (I don't know why)
-        cursor.movePosition(old_position) # move new cursor to old pos
+        signal = object.blockSignals(True)  # stop changing text from signaling
+        old_position = object.textCursor().position()  # find old cursor position
+        cursor = object.textCursor()  # create new cursor (I don't know why)
+        cursor.movePosition(old_position)  # move new cursor to old pos
         fn(*args, **kwargs)
-        object.setTextCursor(cursor) # switch current cursor with newly made
-
object.blockSignals(signal) # allow signals again - - -optAlgorithm = {'DIRECT': nlopt.GN_DIRECT, - 'DIRECT-L': nlopt.GN_DIRECT_L, - 'CRS2 (Controlled Random Search)': nlopt.GN_CRS2_LM, - 'DE (Differential Evolution)': 'pygmo_DE', - 'SaDE (Self-Adaptive DE)': 'pygmo_SaDE', - 'PSO (Particle Swarm Optimization)': 'pygmo_PSO', - 'GWO (Grey Wolf Optimizer)': 'pygmo_GWO', - 'RBFOpt': 'RBFOpt', - 'Nelder-Mead Simplex': nlopt.LN_NELDERMEAD, - 'Subplex': nlopt.LN_SBPLX, - 'COBYLA': nlopt.LN_COBYLA, - 'BOBYQA': nlopt.LN_BOBYQA, - 'IPOPT (Interior Point Optimizer)': 'pygmo_IPOPT'} - -populationAlgorithms = [nlopt.GN_CRS2_LM, nlopt.GN_MLSL_LDS, nlopt.GN_MLSL, nlopt.GN_ISRES] + object.setTextCursor(cursor) # switch current cursor with newly made + object.blockSignals(signal) # allow signals again + + +optAlgorithm = { + "DIRECT": nlopt.GN_DIRECT, + "DIRECT-L": nlopt.GN_DIRECT_L, + "CRS2 (Controlled Random Search)": nlopt.GN_CRS2_LM, + "DE (Differential Evolution)": "pygmo_DE", + "SaDE (Self-Adaptive DE)": "pygmo_SaDE", + "PSO (Particle Swarm Optimization)": "pygmo_PSO", + "GWO (Grey Wolf Optimizer)": "pygmo_GWO", + "RBFOpt": "RBFOpt", + "Nelder-Mead Simplex": nlopt.LN_NELDERMEAD, + "Subplex": nlopt.LN_SBPLX, + "COBYLA": nlopt.LN_COBYLA, + "BOBYQA": nlopt.LN_BOBYQA, + "IPOPT (Interior Point Optimizer)": "pygmo_IPOPT", +} + +populationAlgorithms = [ + nlopt.GN_CRS2_LM, + nlopt.GN_MLSL_LDS, + nlopt.GN_MLSL, + nlopt.GN_ISRES, +] + class Optimization(QtCore.QObject): - def __init__(self, parent): # TODO: Setting tab order needs to happen here + def __init__(self, parent): # TODO: Setting tab order needs to happen here super().__init__(parent) parent = self.parent() - self.settings = {'obj_fcn': {}, 'global': {}, 'local': {}} - + self.settings = {"obj_fcn": {}, "global": {}, "local": {}} + for box in [parent.loss_c_box, parent.bayes_unc_sigma_box]: box.valueChanged.connect(self.update_obj_fcn_settings) - for box in [parent.loss_alpha_box, parent.obj_fcn_type_box, 
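Reviewer note: the `optAlgorithm` table above mixes `nlopt` enum constants with bare strings (`'pygmo_*'`, `'RBFOpt'`), so downstream code must branch on the value's type. A minimal sketch of that dispatch pattern — the integer is a placeholder, not a real nlopt constant, and `run_backend` is illustrative, not project code:

```python
# Sketch of dispatching on a table that mixes backend enums (integers in
# nlopt) with string tags for pygmo-style backends. Values are placeholders.
OPT_ALGORITHM = {
    "Nelder-Mead Simplex": 28,                    # stand-in for nlopt.LN_NELDERMEAD
    "DE (Differential Evolution)": "pygmo_DE",    # string tag -> pygmo backend
}

def run_backend(name):
    alg = OPT_ALGORITHM[name]
    if isinstance(alg, str) and alg.startswith("pygmo_"):
        # string tags route to the pygmo wrapper
        return f"pygmo backend: {alg.removeprefix('pygmo_')}"
    # anything else is treated as an nlopt algorithm enum
    return f"nlopt backend: enum {alg}"
```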
parent.obj_fcn_scale_box, - parent.global_stop_criteria_box, parent.local_opt_choice_box, parent.bayes_dist_type_box]: + for box in [ + parent.loss_alpha_box, + parent.obj_fcn_type_box, + parent.obj_fcn_scale_box, + parent.global_stop_criteria_box, + parent.local_opt_choice_box, + parent.bayes_dist_type_box, + ]: box.currentTextChanged.connect(self.update_obj_fcn_settings) - - self.update_obj_fcn_settings() # initialize settings - + + self.update_obj_fcn_settings() # initialize settings + parent.multiprocessing_box # checkbox - - self.widgets = {'global': {'run': parent.global_opt_enable_box, - 'algorithm': parent.global_opt_choice_box, 'initial_step': [], - 'stop_criteria_type': parent.global_stop_criteria_box, - 'stop_criteria_val': [], 'xtol_rel': [], 'ftol_rel': [], - 'initial_pop_multiplier': []}, - 'local': {'run': parent.local_opt_enable_box, - 'algorithm': parent.local_opt_choice_box, 'initial_step': [], - 'stop_criteria_type': parent.local_stop_criteria_box, - 'stop_criteria_val': [], 'xtol_rel': [], 'ftol_rel': []}} - - self.labels = {'global': [parent.global_text_1, parent.global_text_2, parent.global_text_3], - 'local': [parent.local_text_1, parent.local_text_2, parent.local_text_3]} - + + self.widgets = { + "global": { + "run": parent.global_opt_enable_box, + "algorithm": parent.global_opt_choice_box, + "initial_step": [], + "stop_criteria_type": parent.global_stop_criteria_box, + "stop_criteria_val": [], + "xtol_rel": [], + "ftol_rel": [], + "initial_pop_multiplier": [], + }, + "local": { + "run": parent.local_opt_enable_box, + "algorithm": parent.local_opt_choice_box, + "initial_step": [], + "stop_criteria_type": parent.local_stop_criteria_box, + "stop_criteria_val": [], + "xtol_rel": [], + "ftol_rel": [], + }, + } + + self.labels = { + "global": [ + parent.global_text_1, + parent.global_text_2, + parent.global_text_3, + ], + "local": [parent.local_text_1, parent.local_text_2, parent.local_text_3], + } + self._create_spinboxes() - + for opt_type, 
boxes in self.widgets.items(): for var_type, box in boxes.items(): - self.widgets[opt_type][var_type].info = {'opt_type': opt_type, 'var': var_type} - - if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance(box, QtWidgets.QSpinBox): + self.widgets[opt_type][var_type].info = { + "opt_type": opt_type, + "var": var_type, + } + + if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance( + box, QtWidgets.QSpinBox + ): box.valueChanged.connect(self.update_opt_settings) elif isinstance(box, QtWidgets.QComboBox): box.currentIndexChanged[int].connect(self.update_opt_settings) elif isinstance(box, QtWidgets.QCheckBox): box.stateChanged.connect(self.update_opt_settings) - + self.update_opt_settings() - ''' + """ weight_unc_parameters_stacked_widget WeightFunctionPage weight_fcn_table UncertaintyFunctionPage unc_fcn_table - ''' - + """ + def _create_spinboxes(self): parent = self.parent() - layout = {'global': parent.global_opt_layout, 'local': parent.local_opt_layout} - vars = {'global': {'initial_step': 1E-2, 'stop_criteria_val': 1500, 'xtol_rel': 1E-4, 'ftol_rel': 5E-4, 'initial_pop_multiplier': 1}, - 'local': {'initial_step': 1E-2, 'stop_criteria_val': 1500, 'xtol_rel': 1E-4, 'ftol_rel': 1E-3}} - + layout = {"global": parent.global_opt_layout, "local": parent.local_opt_layout} + vars = { + "global": { + "initial_step": 1e-2, + "stop_criteria_val": 1500, + "xtol_rel": 1e-4, + "ftol_rel": 5e-4, + "initial_pop_multiplier": 1, + }, + "local": { + "initial_step": 1e-2, + "stop_criteria_val": 1500, + "xtol_rel": 1e-4, + "ftol_rel": 1e-3, + }, + } + spinbox = misc_widget.ScientificDoubleSpinBox for opt_type, layout in layout.items(): for n, (var_type, val) in enumerate(vars[opt_type].items()): - if var_type in ['stop_criteria_val', 'initial_pop_multiplier']: - self.widgets[opt_type][var_type] = spinbox(parent=parent, value=val, numFormat='g') + if var_type in ["stop_criteria_val", "initial_pop_multiplier"]: + self.widgets[opt_type][var_type] = spinbox( + parent=parent, 
value=val, numFormat="g" + ) self.widgets[opt_type][var_type].setMinimum(1) self.widgets[opt_type][var_type].setStrDecimals(4) - if var_type == 'stop_criteria_val': + if var_type == "stop_criteria_val": self.widgets[opt_type][var_type].setSingleIntStep(1) else: self.widgets[opt_type][var_type].setSingleIntStep(0.1) else: - self.widgets[opt_type][var_type] = spinbox(parent=parent, value=val, numFormat='e') + self.widgets[opt_type][var_type] = spinbox( + parent=parent, value=val, numFormat="e" + ) self.widgets[opt_type][var_type].setStrDecimals(1) - + layout.addWidget(self.widgets[opt_type][var_type], n, 0) def update_obj_fcn_settings(self, event=None): parent = self.parent() sender = self.sender() - settings = self.settings['obj_fcn'] - - settings['type'] = parent.obj_fcn_type_box.currentText() - settings['scale'] = parent.obj_fcn_scale_box.currentText() + settings = self.settings["obj_fcn"] + + settings["type"] = parent.obj_fcn_type_box.currentText() + settings["scale"] = parent.obj_fcn_scale_box.currentText() loss_alpha_txt = parent.loss_alpha_box.currentText() - if loss_alpha_txt == 'Adaptive': - settings['alpha'] = 3.0 # since bounds in loss are -inf to 2, this triggers an optimization - elif loss_alpha_txt == 'L2 loss': - settings['alpha'] = 2.0 - elif loss_alpha_txt == 'Huber-like': - settings['alpha'] = 1.0 - elif loss_alpha_txt == 'Cauchy': - settings['alpha'] = 0.0 - elif loss_alpha_txt == 'Geman-McClure': - settings['alpha'] = -2.0 - elif loss_alpha_txt == 'Welsch': - settings['alpha'] = -100.0 - - settings['c'] = 1/parent.loss_c_box.value() # this makes increasing values decrease outlier influence - - settings['bayes_dist_type'] = parent.bayes_dist_type_box.currentText() - settings['bayes_unc_sigma'] = parent.bayes_unc_sigma_box.value() + if loss_alpha_txt == "Adaptive": + settings[ + "alpha" + ] = 3.0 # since bounds in loss are -inf to 2, this triggers an optimization + elif loss_alpha_txt == "L2 loss": + settings["alpha"] = 2.0 + elif loss_alpha_txt == 
"Huber-like": + settings["alpha"] = 1.0 + elif loss_alpha_txt == "Cauchy": + settings["alpha"] = 0.0 + elif loss_alpha_txt == "Geman-McClure": + settings["alpha"] = -2.0 + elif loss_alpha_txt == "Welsch": + settings["alpha"] = -100.0 + + settings["c"] = ( + 1 / parent.loss_c_box.value() + ) # this makes increasing values decrease outlier influence + + settings["bayes_dist_type"] = parent.bayes_dist_type_box.currentText() + settings["bayes_unc_sigma"] = parent.bayes_unc_sigma_box.value() # Hides and unhides the Bayesian page depending upon selection. Is this better than disabling though? if sender is parent.obj_fcn_type_box or event is None: - parent.plot.signal.switch_weight_unc_plot() # update weight/uncertainty plot + parent.plot.signal.switch_weight_unc_plot() # update weight/uncertainty plot stackWidget = parent.weight_unc_parameters_stacked_widget - if settings['type'] == 'Residual': - parent.obj_fcn_tab_widget.removeTab(parent.obj_fcn_tab_widget.indexOf(parent.Bayesian_tab)) + if settings["type"] == "Residual": + parent.obj_fcn_tab_widget.removeTab( + parent.obj_fcn_tab_widget.indexOf(parent.Bayesian_tab) + ) stackWidget.setCurrentWidget(parent.WeightFunctionPage) else: - parent.obj_fcn_tab_widget.insertTab(parent.obj_fcn_tab_widget.count() + 1, parent.Bayesian_tab, "Bayesian") + parent.obj_fcn_tab_widget.insertTab( + parent.obj_fcn_tab_widget.count() + 1, + parent.Bayesian_tab, + "Bayesian", + ) stackWidget.setCurrentWidget(parent.UncertaintyFunctionPage) self.save_settings(event) - + def update_opt_settings(self, event=None): parent = self.parent() sender = self.sender() if event is not None: box = sender - opt_type = box.info['opt_type'] - var_type = box.info['var'] - - if var_type == 'run': - self.settings[opt_type]['run'] = box.isChecked() - for box in list(self.widgets[opt_type].values()) + self.labels[opt_type]: + opt_type = box.info["opt_type"] + var_type = box.info["var"] + + if var_type == "run": + self.settings[opt_type]["run"] = box.isChecked() + 
for box in ( + list(self.widgets[opt_type].values()) + self.labels[opt_type] + ): if box is not self.sender(): - box.setEnabled(self.settings[opt_type]['run']) + box.setEnabled(self.settings[opt_type]["run"]) return - - elif var_type == 'algorithm': - if opt_type == 'global': - if box.currentText() == 'MLSL (Multi-Level Single-Linkage)': - self.widgets['local']['run'].setEnabled(False) - self.widgets['local']['run'].setChecked(True) - else: - self.widgets['local']['run'].setEnabled(True) - + + elif var_type == "algorithm": + if opt_type == "global": + if box.currentText() == "MLSL (Multi-Level Single-Linkage)": + self.widgets["local"]["run"].setEnabled(False) + self.widgets["local"]["run"].setChecked(True) + else: + self.widgets["local"]["run"].setEnabled(True) + for opt_type, boxes in self.widgets.items(): for var_type, box in self.widgets[opt_type].items(): - if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance(box, QtWidgets.QSpinBox): + if isinstance(box, QtWidgets.QDoubleSpinBox) or isinstance( + box, QtWidgets.QSpinBox + ): self.settings[opt_type][var_type] = box.value() elif isinstance(box, QtWidgets.QComboBox): - if box in [parent.global_opt_choice_box, parent.local_opt_choice_box]: - self.settings[opt_type][var_type] = optAlgorithm[box.currentText()] - if sender is box and box is parent.global_opt_choice_box: # Toggle pop_multiplier box - if self.settings[opt_type][var_type] in populationAlgorithms: - self.widgets[opt_type]['initial_pop_multiplier'].setEnabled(True) + if box in [ + parent.global_opt_choice_box, + parent.local_opt_choice_box, + ]: + self.settings[opt_type][var_type] = optAlgorithm[ + box.currentText() + ] + if ( + sender is box and box is parent.global_opt_choice_box + ): # Toggle pop_multiplier box + if ( + self.settings[opt_type][var_type] + in populationAlgorithms + ): + self.widgets[opt_type][ + "initial_pop_multiplier" + ].setEnabled(True) else: - self.widgets[opt_type]['initial_pop_multiplier'].setEnabled(False) + 
self.widgets[opt_type][ + "initial_pop_multiplier" + ].setEnabled(False) else: self.settings[opt_type][var_type] = box.currentText() - if sender is box and box is self.widgets[opt_type]['stop_criteria_type']: - if self.settings[opt_type][var_type] == 'No Abort Criteria': - self.widgets[opt_type]['stop_criteria_val'].setEnabled(False) + if ( + sender is box + and box is self.widgets[opt_type]["stop_criteria_type"] + ): + if self.settings[opt_type][var_type] == "No Abort Criteria": + self.widgets[opt_type]["stop_criteria_val"].setEnabled( + False + ) else: - self.widgets[opt_type]['stop_criteria_val'].setEnabled(True) + self.widgets[opt_type]["stop_criteria_val"].setEnabled( + True + ) elif isinstance(box, QtWidgets.QCheckBox): self.settings[opt_type][var_type] = box.isChecked() self.save_settings(event) - + def save_settings(self, event=None): - if event is None: return - if not hasattr(self.parent(), 'user_settings'): return - if 'path_file' not in self.parent().path: return + if event is None: + return + if not hasattr(self.parent(), "user_settings"): + return + if "path_file" not in self.parent().path: + return self.parent().user_settings.save() def get(self, opt_type, var_type): return self.settings[opt_type][var_type] - diff --git a/src/plot/base_plot.py b/src/plot/base_plot.py index 57f01cd..dd6cf12 100644 --- a/src/plot/base_plot.py +++ b/src/plot/base_plot.py @@ -175,10 +175,12 @@ def update_xylim(self, axes, xlim=[], ylim=[], force_redraw=True): for (axis, lim) in zip(['x', 'y'], [xlim, ylim]): # Set Limits - if len(lim) == 0: - eval('self.set_' + axis + 'lim(axes, data["' + axis + '"])') + if len(lim) < 2: + eval(f'self.set_{axis}lim(axes, data["{axis}"])') + elif lim[0] == lim[1]: + pass else: - eval('axes.set_' + axis + 'lim(lim)') + eval(f'axes.set_{axis}lim(lim)') # If bisymlog, also update scaling, C if eval('axes.get_' + axis + 'scale()') == 'bisymlog': diff --git a/src/plot/signal_plot.py b/src/plot/signal_plot.py index f543c12..80611bf 100644 --- 
a/src/plot/signal_plot.py +++ b/src/plot/signal_plot.py @@ -539,8 +539,8 @@ def switch_weight_unc_plot(self): obj_fcn_type = parent.obj_fcn_type_box.currentText() if obj_fcn_type == 'Residual': self.ax[0].item['title'].set_text('Weighting') # set title - self.update_xylim(self.ax[0], xlim=self.ax[0].get_xlim(), ylim=[-0.1, 1.1], force_redraw=False) - for i in range(0,2): + self.update_xylim(self.ax[0], xlim=self.ax[0].get_xlim(), ylim=[-0.1, 1.1], force_redraw=False) + for i in range(0, 2): self.ax[1].item['cutoff_line'][i].set_xdata([np.nan]) else: self.ax[0].item['title'].set_text('Uncertainty') # set title diff --git a/src/save_output.py b/src/save_output.py index 9f30668..7a0dc26 100644 --- a/src/save_output.py +++ b/src/save_output.py @@ -4,8 +4,9 @@ import numpy as np from tabulate import tabulate -import soln2ck import pathlib + +from cantera.yaml2ck import convert as soln2ck # from cantera import ck2cti # Maybe later use ck2cti.Parser.writeCTI to write cti file class Save: @@ -178,4 +179,4 @@ def chemkin_format(self, gas=[], path=[]): if not path: path = self.path['Mech.ck'] - soln2ck.write(gas, path, self.path['Cantera_Mech']) + soln2ck(gas, mechanism_path=path, sort_species="molar-mass", overwrite=True) \ No newline at end of file diff --git a/src/settings.py b/src/settings.py index 471727c..15ae567 100644 --- a/src/settings.py +++ b/src/settings.py @@ -720,7 +720,7 @@ def add_series(self): # need to think about what to do when mech ch parent.path['shock'] = parent.path_set.shock_paths(prefix='Shock', ext='exp') if len(parent.path['shock']) == 0: # if there are no shocks in listed directory - parent.directory.update_icons(invalid = 'exp_main') + parent.directory.update_icons(invalid=['exp_main']) return if self.in_table and not self.in_table[-1]: # if list exists and last item not in table, clear it @@ -1029,6 +1029,8 @@ def rate_bnds(self, shock): if self.parent.mech_tree.rxn[rxnIdx]['rxnType'] in ['Arrhenius', 'Plog Reaction', 'Falloff Reaction']: 
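Reviewer note: the `update_xylim` hunk above modernizes the string building with f-strings but still routes attribute access through `eval`; `getattr` gives the same dynamic `set_xlim`/`set_ylim` lookup without evaluating strings as code. A sketch against a hypothetical axes stand-in (`FakeAxes` and `update_limits` are illustrative, not project code):

```python
class FakeAxes:
    """Hypothetical stand-in for a Matplotlib Axes object."""
    def __init__(self):
        self.limits = {}
    def set_xlim(self, lim):
        self.limits["x"] = tuple(lim)
    def set_ylim(self, lim):
        self.limits["y"] = tuple(lim)

def update_limits(axes, xlim=(), ylim=()):
    # getattr(axes, f"set_{axis}lim") replaces eval(f'axes.set_{axis}lim(lim)')
    for axis, lim in zip("xy", (xlim, ylim)):
        if len(lim) == 2 and lim[0] != lim[1]:   # skip degenerate limits
            getattr(axes, f"set_{axis}lim")(lim)
```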
resetVal = mech.gas.forward_rate_constants[rxnIdx] shock['rate_reset_val'].append(resetVal) + if 'limits' not in mech.rate_bnds[rxnIdx]: + print(rxnIdx) rate_bnds = mech.rate_bnds[rxnIdx]['limits'](resetVal) shock['rate_bnds'].append(rate_bnds) diff --git a/src/soln2ck.py b/src/soln2ck.py deleted file mode 100644 index d0697b7..0000000 --- a/src/soln2ck.py +++ /dev/null @@ -1,654 +0,0 @@ -# This file is part of Frhodo. Copyright © 2020, UChicago Argonne, LLC -# and licensed under BSD-3-Clause. See License.txt in the top-level -# directory for license and copyright information. - -''' -Adapted from Kyle Niemeyer's pyMARS Jul 24, 2019 - -Writes a solution object to a chemkin inp file -currently only works for Elementary, Falloff and ThreeBody Reactions -Cantera version 2.5 required - -KE Niemeyer, CJ Sung, and MP Raju. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis. Combust. Flame, 157(9):1760--1770, 2010. doi:10.1016/j.combustflame.2009.12.022 -KE Niemeyer and CJ Sung. On the importance of graph search algorithms for DRGEP-based mechanism reduction methods. Combust. Flame, 158(8):1439--1443, 2011. doi:10.1016/j.combustflame.2010.12.010. -KE Niemeyer and CJ Sung. Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels. Combust. Flame, in press, 2014. doi:10.1016/j.combustflame.2014.05.001 -TF Lu and CK Law. Combustion and Flame, 154:153--163, 2008.
doi:10.1016/j.combustflame.2007.11.013 - -''' - -import os, pathlib, re -from textwrap import fill -from collections import Counter - -import cantera as ct - -try: - import ruamel_yaml as yaml -except ImportError: - from ruamel import yaml - -# number of calories in 1000 Joules -CALORIES_CONSTANT = 4184.0 - -# Conversion from 1 debye to coulomb-meters -DEBEYE_CONVERSION = 3.33564e-30 - -def reorder_reaction_equation(solution, reaction): - # Split Reaction Equation - rxn_eqn = reaction.equation - for reaction_direction in ['<=>', '<=', '=>']: - if f' {reaction_direction} ' in rxn_eqn: - break - - regex_str = fr'{reaction_direction}|\+|\(\+M\)\s*(?![^()]*\))' - items_list = re.split(regex_str, rxn_eqn.replace(' ', '')) - items_list = ['(+M)' if not item else item for item in items_list] - for third_body in ['(+M)', 'M', '']: # search rxn for third body - if third_body in items_list: # if reaches '', doesn't exist - if third_body == '(+M)': - third_body = ' (+M)' - elif third_body == 'M': - third_body = ' + M' - break - - # Sort and apply to reaction equation - reaction_txt = [] - reaction_split = {'reactants': reaction.reactants, - 'products': reaction.products} - for n, (reaction_side, species) in enumerate(reaction_split.items()): - species_weights = [] - for key in species.keys(): - index = solution.species_index(key) - species_weights.append(solution.molecular_weights[index]) - - # Append coefficient to species - species_list = [] - for species_text, coef in species.items(): - if coef == 1.0: - species_list.append(species_text) - elif coef.is_integer(): - species_list.append(f'{coef:.0f} {species_text}') - else: - species_list.append(f'{coef:f}'.rstrip("0").rstrip(".") + f' {species_text}') - - species = species_list - - # Reorder species based on molecular weights - species = [x for y, x in sorted(zip(species_weights, species))][::-1] - reaction_txt.append(' + '.join(species) + third_body) - - reaction_txt = f' {reaction_direction} '.join(reaction_txt) - - 
return reaction_txt - - -def match_reaction(solution, yaml_rxn): - yaml_rxn = {'eqn': yaml_rxn} - - for reaction_direction in [' <=> ', ' <= ', ' => ']: - if reaction_direction in yaml_rxn['eqn']: - break - for third_body in [' (+M)', ' + M', '']: # search eqn for third body - if third_body in yaml_rxn['eqn']: # if reaches '', doesn't exist - break - - yaml_rxn_split = yaml_rxn['eqn'].split(reaction_direction) - for i, side in zip([0, 1], ['reac', 'prod']): - yaml_rxn[side] = {} - species = yaml_rxn_split[i].replace(third_body, '').split(' + ') - yaml_rxn[side].update(Counter(species)) - - for rxn in solution.reactions(): - if (rxn.reactants == yaml_rxn['reac'] and - rxn.products == yaml_rxn['prod'] and - third_body in str(rxn)): - - return str(rxn) # return rxn if match - - return yaml_rxn['eqn'] # returns yaml_str if no match - - -def get_notes(path=None, solution=None): - """Get notes by parsing input mechanism in yaml format - Parameters - ---------- - path : path or str, optional - Path of yaml file used as input in order to parse for notes - solution : - """ - - note = {'header': [], 'species_thermo': {}, 'species': {}, 'reaction': {}} - - if path is None: return note - - with open(path, 'r') as yaml_file: - data = yaml.load(yaml_file, yaml.RoundTripLoader) - - # Header note - if 'description' in data: - note['header'] = data['description'] - else: - note['header'] = '' - - # Species and thermo_species notes - for species in data['species']: - if 'note' in species: - note['species'][species['name']] = species['note'] - else: - note['species'][species['name']] = '' - - if 'note' in species['thermo']: - note['species_thermo'][species['name']] = species['thermo']['note'] - else: - note['species_thermo'][species['name']] = '' - - if 'reactions' in data: - for rxn in data['reactions']: - ct_rxn_eqn = match_reaction(solution, rxn['equation']) - if 'note' in rxn: - note['reaction'][ct_rxn_eqn] = '! ' + rxn['note'].replace('\n', '\n! 
') - else: - note['reaction'][ct_rxn_eqn] = '' - - return note - - -def eformat(f, precision=7, exp_digits=3): - s = f"{f: .{precision}e}" - if s == ' inf' or s == '-inf': - return s - else: - mantissa, exp = s.split('e') - exp_digits += 1 # +1 due to sign - return f"{mantissa}E{int(exp):+0{exp_digits}}" - - -def build_arrhenius(rate, reaction_order, reaction_type): - """Builds Arrhenius coefficient string based on reaction type. - Parameters - ---------- - rate : cantera.Arrhenius - Arrhenius-form reaction rate coefficient - reaction_order : int or float - Order of reaction (sum of reactant stoichiometric coefficients) - reaction_type : {cantera.ElementaryReaction, cantera.ThreeBodyReaction, cantera.PlogReaction} - Type of reaction - Returns - ------- - str - String with Arrhenius coefficients - """ - if reaction_type in [ct.ElementaryReaction, ct.PlogReaction]: - pre_exponential_factor = rate.pre_exponential_factor * 1e3**(reaction_order - 1) - - elif reaction_type == ct.ThreeBodyReaction: - pre_exponential_factor = rate.pre_exponential_factor * 1e3**reaction_order - - elif reaction_type in [ct.FalloffReaction, ct.ChemicallyActivatedReaction]: - raise ValueError('Function does not support falloff or chemically activated reactions') - else: - raise NotImplementedError('Reaction type not supported: ', reaction_type) - - activation_energy = rate.activation_energy / CALORIES_CONSTANT - arrhenius = [f'{eformat(pre_exponential_factor)}', - f'{eformat(rate.temperature_exponent)}', - f'{eformat(activation_energy)}'] - return ' '.join(arrhenius) - - -def build_falloff_arrhenius(rate, reaction_order, reaction_type, pressure_limit): - """Builds Arrhenius coefficient strings for falloff and chemically-activated reactions. 
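Reviewer note: the `eformat` helper in the file being deleted above pads Chemkin coefficients into a fixed-width, sign-padded exponent field. Reproduced standalone here with example outputs for reference:

```python
def eformat(f, precision=7, exp_digits=3):
    # Fixed-width scientific notation for Chemkin columns, e.g. ' 1.2345678E+004'
    s = f"{f: .{precision}e}"
    if s in (' inf', '-inf'):
        return s
    mantissa, exp = s.split('e')
    exp_digits += 1  # +1 for the exponent sign
    return f"{mantissa}E{int(exp):+0{exp_digits}}"

eformat(12345.678)            # -> ' 1.2345678E+004'
eformat(-0.001, precision=3)  # -> '-1.000E-003'
```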
- Parameters - ---------- - rate : cantera.Arrhenius - Arrhenius-form reaction rate coefficient - reaction_order : int or float - Order of reaction (sum of reactant stoichiometric coefficients) - reaction_type : {ct.FalloffReaction, ct.ChemicallyActivatedReaction} - Type of reaction - pressure_limit : {'high', 'low'} - string designating pressure limit - - Returns - ------- - str - Arrhenius coefficient string - """ - assert pressure_limit in ['low', 'high'], 'Pressure range needs to be high or low' - - # Each needs more complicated handling depending on whether the high- or low-pressure limit is used - if reaction_type == ct.FalloffReaction: - if pressure_limit == 'low': - pre_exponential_factor = rate.pre_exponential_factor * 1e3**(reaction_order) - elif pressure_limit == 'high': - pre_exponential_factor = rate.pre_exponential_factor * 1e3**(reaction_order - 1) - - elif reaction_type == ct.ChemicallyActivatedReaction: - if pressure_limit == 'low': - pre_exponential_factor = rate.pre_exponential_factor * 1e3**(reaction_order - 1) - elif pressure_limit == 'high': - pre_exponential_factor = rate.pre_exponential_factor * 1e3**(reaction_order - 2) - else: - raise ValueError('Reaction type not supported: ', reaction_type) - - activation_energy = rate.activation_energy / CALORIES_CONSTANT - arrhenius = [f'{eformat(pre_exponential_factor)}', - f'{eformat(rate.temperature_exponent)}', - f'{eformat(activation_energy)}' - ] - return ' '.join(arrhenius) - - -def build_falloff(parameters, falloff_function): - """Creates falloff reaction Troe parameter string - Parameters - ---------- - parameters : numpy.ndarray - Array of falloff parameters; length varies based on ``falloff_function`` - falloff_function : {'Troe', 'SRI'} - Type of falloff function - Returns - ------- - falloff_string : str - String of falloff parameters - """ - if falloff_function == 'Troe': - if parameters[-1] == 0.0: - falloff = [f'{eformat(f)}'for f in parameters[:-1]] - falloff.append(' '*15) - else: - falloff = 
[f'{eformat(f)}'for f in parameters] - - falloff_string = f"TROE / {' '.join(falloff)} /\n" - elif falloff_function == 'SRI': - falloff = [f'{eformat(f)}'for f in parameters] - falloff_string = f"SRI / {' '.join(falloff)} /\n" - else: - raise NotImplementedError(f'Falloff function not supported: {falloff_function}') - - return falloff_string - - -def species_data_text(species_list, note): - max_species_len = max([len(s) for s in species_list]) - if note: - max_species_len = max([16, max_species_len]) - species_txt = [] - for species in species_list: - text = f'{species:<{max_species_len}} ! {note[species]}\n' - species_txt.append(text) - - species_txt = ''.join(species_txt) - - else: - species_names = [f"{s:<{max_species_len}}" for s in species_list] - species_names = fill( - ' '.join(species_names), - width=72, # max length is 16, this gives 4 species per line - break_long_words=False, - break_on_hyphens=False - ) - - species_txt = f'{species_names}\n' - - text = ('SPECIES\n' + - species_txt + - 'END\n\n\n') - - return text - - -def thermo_data_text(species_list, note, input_type='included'): - """Returns thermodynamic data in Chemkin-format file. 
- Parameters - ---------- - species_list : list of cantera.Species - List of species objects - input_type : str, optional - 'included' if thermo will be printed in mech file, 'file' otherwise - """ - - if input_type == 'included': - thermo_text = ['THERMO ALL\n' + - ' 300.000 1000.000 6000.000\n'] - else: - thermo_text = ['THERMO\n' + - ' 300.000 1000.000 6000.000\n'] - - # write data for each species in the Solution object - for species in species_list: - composition_string = ''.join([f'{s:2}{int(v):>3}' - for s, v in species.composition.items() - ]) - - # first line has species name, space for notes/date, elemental composition, - # phase, thermodynamic range temperatures (low, high, middle), and a "1" - # total length should be 80 - - # attempt to split note and comment - if not note: - comment, comment_str, note_str = '', '', '' - elif len(note[species.name].split('\n', 1)) == 1: - comment = '' - comment_str = '' - note_str = note[species.name] - else: - comment = '!\n' - note_str, comment_str = note[species.name].split('\n', 1) - - if len(f'{species.name} {note_str}') > 24: - comment_str += '\n' + note_str - note_str = '' - - comment_str = comment_str.replace('\n', '\n! ') - comment = f'{comment}! 
{comment_str}' - - name_and_note = f'{species.name} {note_str}' - species_string = (comment + '\n' + - f'{name_and_note:<24}' + # name and date/note field - f'{composition_string:<20}' + - 'G' + # only supports gas phase - f'{species.thermo.min_temp:10.3f}' + - f'{species.thermo.max_temp:10.3f}' + - f'{species.thermo.coeffs[0]:8.2f}' + - 6*' ' + # unused atomic symbols/formula, and blank space - '1\n' - ) - - # second line has first five coefficients of high-temperature range, - # ending with a "2" in column 79 - species_string += ( - ''.join([f'{c:15.8e}' for c in species.thermo.coeffs[1:6]]) + - ' ' + - '2\n' - ) - - # third line has the last two coefficients of the high-temperature range, - # first three coefficients of low-temperature range, and "3" - species_string += ( - ''.join([f'{c:15.8e}' for c in species.thermo.coeffs[6:8]]) + - ''.join([f'{c:15.8e}' for c in species.thermo.coeffs[8:11]]) + - ' ' + - '3\n' - ) - - # fourth and last line has the last four coefficients of the - # low-temperature range, and "4" - - species_string += ( - ''.join([f'{c:15.8e}' for c in species.thermo.coeffs[11:15]]) + - 19*' ' + - '4\n' - ) - - thermo_text.append(species_string) - - if input_type == 'included': - thermo_text.append('END\n\n\n') - else: - thermo_text.append('END\n') - - return ''.join(thermo_text) - - -def write_transport_data(species_list, filename='generated_transport.dat'): - """Writes transport data to Chemkin-format file. 
-    Parameters
-    ----------
-    species_list : list of cantera.Species
-        List of species objects
-    filename : path or str, optional
-        Filename for new Chemkin transport database file
-    """
-    geometry = {'atom': '0', 'linear': '1', 'nonlinear': '2'}
-
-    with open(filename, 'w') as trans_file:
-        # write data for each species in the Solution object
-        for species in species_list:
-            # each line contains the species name, integer representing
-            # geometry, Lennard-Jones potential well depth in K,
-            # Lennard-Jones collision diameter in angstroms,
-            # dipole moment in Debye,
-            # polarizability in cubic angstroms, and
-            # rotational relaxation collision number at 298 K.
-            species_string = (
-                f'{species.name:<16}' +
-                f'{geometry[species.transport.geometry]:>4}' +
-                f'{(species.transport.well_depth / ct.boltzmann):>10.3f}' +
-                f'{(species.transport.diameter * 1e10):>10.3f}' +
-                f'{(species.transport.dipole / DEBEYE_CONVERSION):>10.3f}' +
-                f'{(species.transport.polarizability * 1e30):>10.3f}' +
-                f'{species.transport.rotational_relaxation:>10.3f}' +
-                '\n'
-                )
-
-            trans_file.write(species_string)
-
-
-def write(solution, output_path='', input_yaml='',
-          skip_thermo=False, same_file_thermo=True,
-          skip_transport=False):
-    """Writes Cantera solution object to Chemkin-format file.
-    Parameters
-    ----------
-    solution : cantera.Solution
-        Model to be written
-    output_path : path or str, optional
-        Path of file to be written; if not provided, defaults to
-        '{solution.name}.ck' in the current working directory
-    input_yaml : path or str, optional
-        Path of YAML file used as input, parsed for notes
-    skip_thermo : bool, optional
-        Flag to skip writing thermo data
-    same_file_thermo : bool, optional
-        Flag to write thermo data in the mechanism file
-    skip_transport : bool, optional
-        Flag to skip writing transport data in separate file
-
-    Returns
-    -------
-    output_files : list of pathlib.Path
-        Paths of written model files (.ck)
-
-    Examples
-    --------
-    >>> gas = cantera.Solution('gri30.cti')
-    >>> soln2ck.write(gas)
-    reduced_gri30.ck
-    """
-    if output_path:
-        if not isinstance(output_path, pathlib.PurePath):
-            output_path = pathlib.Path(output_path)
-    else:
-        main_path = pathlib.Path.cwd()
-        output_path = main_path / f'{solution.name}.ck'
-
-    if output_path.is_file():
-        output_path.unlink()
-
-    main_path = output_path.parents[0]
-    basename = output_path.stem
-    output_files = [output_path]
-
-    if input_yaml:
-        if not isinstance(input_yaml, pathlib.PurePath):
-            input_yaml = pathlib.Path(input_yaml)
-
-        note = get_notes(input_yaml, solution)
-    else:
-        note = get_notes()
-
-    with open(output_path, 'w') as mech_file:
-        # Write title block to file
-        if note['header']:
-            note["header"] = note['header'].replace('\n', '\n! ')
-            mech_file.write(f'! {note["header"]}\n! \n')
-        mech_file.write('! Chemkin file converted from Cantera solution object\n! \n\n')
-
-        # write species and element lists to file
-        element_names = ' '.join(solution.element_names)
-        mech_file.write(
-            'ELEMENTS\n' +
-            f'{element_names}\n' +
-            'END\n\n\n'
-            )
-
-        mech_file.write(species_data_text(solution.species_names, note['species']))
-
-        # Write thermo to file
-        if not skip_thermo and same_file_thermo:
-            mech_file.write(thermo_data_text(solution.species(), note['species_thermo'],
-                                             input_type='included'))
-
-        # Write reactions to file
-        max_rxn_width = 3 + max([len(rxn.equation) for rxn in solution.reactions()] + [48])
-
-        mech_file.write('REACTIONS CAL/MOLE MOLES\n')
-        # Write data for each reaction in the Solution Object
-        for n, reaction in enumerate(solution.reactions()):
-            reaction_equation = str(reaction)
-
-            reaction_string = ''
-            if reaction_equation in note['reaction']:
-                rxn_note = note['reaction'][reaction_equation]
-                rxn_note = rxn_note.rsplit('\n! ', 1)
-                if len(rxn_note) > 1:
-                    reaction_string = f'{rxn_note[0]}\n'
-                after_eqn_text = rxn_note[-1].strip()
-                rxn_note[-1] = f'! {after_eqn_text}'
-            else:
-                rxn_note = ['']
-
-            reaction_equation = reorder_reaction_equation(solution, reaction)
-            reaction_string += f'{reaction_equation:<{max_rxn_width}}'
-
-            # The Arrhenius parameters that follow the equation string on the main line
-            # depend on the type of reaction.
-            if type(reaction) in [ct.ElementaryReaction, ct.ThreeBodyReaction]:
-                arrhenius = build_arrhenius(
-                    reaction.rate,
-                    sum(reaction.reactants.values()),
-                    type(reaction)
-                    )
-
-            elif type(reaction) == ct.FalloffReaction:
-                # high-pressure limit is included on the main reaction line
-                arrhenius = build_falloff_arrhenius(
-                    reaction.high_rate,
-                    sum(reaction.reactants.values()),
-                    ct.FalloffReaction,
-                    'high'
-                    )
-
-            elif type(reaction) == ct.ChemicallyActivatedReaction:
-                # low-pressure limit is included on the main reaction line
-                arrhenius = build_falloff_arrhenius(
-                    reaction.low_rate,
-                    sum(reaction.reactants.values()),
-                    ct.ChemicallyActivatedReaction,
-                    'low'
-                    )
-
-            elif type(reaction) == ct.ChebyshevReaction:
-                arrhenius = '1.0e0 0.0 0.0'
-
-            elif type(reaction) == ct.PlogReaction:
-                arrhenius = build_arrhenius(
-                    reaction.rates[0][1],
-                    sum(reaction.reactants.values()),
-                    ct.PlogReaction
-                    )
-
-            else:
-                raise NotImplementedError(f'Unsupported reaction type: {type(reaction)}')
-
-            reaction_string += f'{arrhenius} {rxn_note[-1]}\n'
-
-            # now write any auxiliary information for the reaction
-            if type(reaction) == ct.FalloffReaction:
-                # for falloff reaction, need to write low-pressure limit Arrhenius expression
-                arrhenius = build_falloff_arrhenius(
-                    reaction.low_rate,
-                    sum(reaction.reactants.values()),
-                    ct.FalloffReaction,
-                    'low'
-                    )
-                reaction_string += f'{"LOW / ".rjust(max_rxn_width)}{arrhenius} /\n'
-
-                # need to print additional falloff parameters if present
-                if reaction.falloff.parameters.size > 0:
-                    falloff_str = build_falloff(reaction.falloff.parameters, reaction.falloff.type)
-                    width = max_rxn_width - 10 - 15*(reaction.falloff.parameters.size - 3)
-                    reaction_string += f'{"".ljust(width)}{falloff_str}'
-
-            elif type(reaction) == ct.ChemicallyActivatedReaction:
-                # for chemically activated reaction, need to write high-pressure expression
-                arrhenius = build_falloff_arrhenius(
-                    reaction.high_rate,
-                    sum(reaction.reactants.values()),
ct.ChemicallyActivatedReaction,
-                    'high'
-                    )
-                reaction_string += f'{"HIGH / ".rjust(max_rxn_width)}{arrhenius} /\n'
-
-                # need to print additional falloff parameters if present
-                if reaction.falloff.parameters.size > 0:
-                    falloff_str = build_falloff(reaction.falloff.parameters, reaction.falloff.type)
-                    width = max_rxn_width - 10 - 15*(reaction.falloff.parameters.size - 3)
-                    reaction_string += f'{"".ljust(width)}{falloff_str}'
-
-            elif type(reaction) == ct.PlogReaction:
-                # just need one rate per line
-                for rate in reaction.rates:
-                    pressure = f'{eformat(rate[0] / ct.one_atm)}'
-                    arrhenius = build_arrhenius(rate[1],
-                                                sum(reaction.reactants.values()),
-                                                ct.PlogReaction
-                                                )
-                    reaction_string += (f'{"PLOG / ".rjust(max_rxn_width-18)}'
-                                        f'{pressure} {arrhenius} /\n')
-
-            elif type(reaction) == ct.ChebyshevReaction:
-                reaction_string += (
-                    f'TCHEB / {reaction.Tmin} {reaction.Tmax} /\n' +
-                    f'PCHEB / {reaction.Pmin / ct.one_atm} {reaction.Pmax / ct.one_atm} /\n' +
-                    f'CHEB / {reaction.nTemperature} {reaction.nPressure} /\n'
-                    )
-                for coeffs in reaction.coeffs:
-                    coeffs_row = ' '.join([f'{c:.6e}' for c in coeffs])
-                    reaction_string += f'CHEB / {coeffs_row} /\n'
-
-            # need to trim and print third-body efficiencies, if present
-            if type(reaction) in [ct.ThreeBodyReaction, ct.FalloffReaction,
-                                  ct.ChemicallyActivatedReaction
-                                  ]:
-                # trims efficiencies list
-                reduced_efficiencies = {s: reaction.efficiencies[s]
-                                        for s in reaction.efficiencies
-                                        if s in solution.species_names
-                                        }
-                efficiencies_str = ' '.join([f'{s}/ {v:.3f}/' for s, v in reduced_efficiencies.items()])
-                if efficiencies_str:
-                    reaction_string += ' ' + efficiencies_str + '\n'
-
-            if reaction.duplicate:
-                reaction_string += ' DUPLICATE\n'
-
-            mech_file.write(reaction_string)
-
-        mech_file.write('END')
-
-    # write thermo data
-    if not skip_thermo and not same_file_thermo:
-        therm_path = main_path / f'{basename}.therm'
-        with open(therm_path, 'w') as thermo_file:
thermo_file.write(thermo_data_text(solution.species(), input_type='file'))
-        output_files.append(therm_path)
-
-    # TODO: more careful check for presence of transport data?
-    if not skip_transport and all(sp.transport for sp in solution.species()):
-        trans_path = main_path / f'{basename}_transport.dat'
-        write_transport_data(solution.species(), trans_path)
-        output_files.append(trans_path)
-
-    return output_files
-
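For reference, the 80-column first line of the NASA-7 thermo entry that the removed `thermo_data_text` assembles can be sketched in isolation. This is a minimal sketch with made-up species data; it hardcodes the common (middle) temperature where the real code uses `species.thermo.coeffs[0]`, and omits the note/comment handling:

```python
# Sketch of the 80-column first line of a Chemkin NASA-7 thermo entry,
# mirroring the fixed-width layout in the deleted code above.
# Species data are hypothetical; the middle temperature is hardcoded.
def thermo_first_line(name, composition, t_low, t_high):
    # elemental composition: 2-char element symbol + 3-char right-aligned count
    comp = ''.join(f'{el:2}{int(n):>3}' for el, n in composition.items())
    return (
        f'{name:<24}'        # species name plus date/note field (24 cols)
        f'{comp:<20}'        # elemental composition (20 cols)
        'G'                  # phase; the writer only supports gas
        f'{t_low:10.3f}'     # low temperature limit
        f'{t_high:10.3f}'    # high temperature limit
        f'{1000.0:8.2f}'     # common temperature (coeffs[0] in the real code)
        + 6*' ' + '1'        # filler, then "1" in column 80
    )

line = thermo_first_line('H2O', {'H': 2, 'O': 1}, 200.0, 3500.0)
print(line)
```

The column widths sum to exactly 80 (24 + 20 + 1 + 10 + 10 + 8 + 6 + 1), which is why the trailing "1"/"2"/"3"/"4" markers land in the last column of each card line.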
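Similarly, the fixed-width transport line produced by the removed `write_transport_data` can be sketched standalone. The species values below are hypothetical illustration data, and `BOLTZMANN`/`DEBYE` stand in for `ct.boltzmann` and the module's `DEBEYE_CONVERSION` constant:

```python
# Sketch of one Chemkin transport-database line (16 + 4 + 5*10 columns),
# showing the SI-to-Chemkin unit conversions used in the deleted code.
# Constants stand in for ct.boltzmann and DEBEYE_CONVERSION.
BOLTZMANN = 1.380649e-23   # J/K
DEBYE = 3.33564e-30        # C*m per Debye

def transport_line(name, geometry_flag, well_depth, diameter,
                   dipole, polarizability, rot_relax):
    """Format one transport entry; inputs in SI units."""
    return (
        f'{name:<16}'
        f'{geometry_flag:>4}'               # '0' atom, '1' linear, '2' nonlinear
        f'{well_depth / BOLTZMANN:>10.3f}'  # J -> K
        f'{diameter * 1e10:>10.3f}'         # m -> angstrom
        f'{dipole / DEBYE:>10.3f}'          # C*m -> Debye
        f'{polarizability * 1e30:>10.3f}'   # m^3 -> cubic angstrom
        f'{rot_relax:>10.3f}'               # rotational relaxation at 298 K
    )

# hypothetical argon-like values
line = transport_line('AR', '0', 1.654e-21, 3.33e-10, 0.0, 0.0, 0.0)
print(line)
```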