Minimize power | ASB + Neuralfoil #109
-
Hi Peter,

Thanks heaps for these awesome libraries. I am new to both AeroSandbox and NeuralFoil.

I tried toggling `log_transform` on and off for variables, but was not able to get the solution to converge. Could someone please point me in the right direction as to what has to be changed in order to get sensible values? Thanks!
-
Hey Kanishka,

Thanks for writing in, and for including the code snippet!

The "diverging iterates" error means that the problem appears to be unbounded - some variable is being driven towards infinity. To determine which it is, we can modify our `opti.solve()` command to instead say:

```python
opti.solve(behavior_on_failure="return_last")
```

This will make it so that the optimizer will ignore any `RuntimeError` that is raised (such as the diverging-iterates error) and return the last value of the variables anyway. The reason this is not the default behavior (and is instead "opt-in" with the `behavior_on_failure` keyword argument) is to ensure that users are 100% aware that the result is not a converged solution.
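In isolation, the pattern looks like this - a minimal sketch with a toy unbounded problem rather than your original script, assuming a recent AeroSandbox version (the keyword above implies one):

```python
import aerosandbox as asb

opti = asb.Opti()

# A deliberately unbounded toy problem: minimizing x drives x towards
# -infinity, which eventually triggers the "diverging iterates" failure.
x = opti.variable(init_guess=1)
opti.minimize(x)

# Instead of raising a RuntimeError, hand back the last iterate for inspection.
sol = opti.solve(behavior_on_failure="return_last")

print(sol.value(x))  # A huge negative number: the runaway variable.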
With that line of code changed, we can re-run the code. On my machine, that produces:

```
AR = 106891.89476263986
WA = 2.4438014646453908e+20 m^2
AL = 22382.562930647404
AS = 5.260276056086263e-07
```

The issue here is apparent - airspeed is going to zero, while the other variables diverge to match. Some more digging (use …) confirms this. So, the reason AeroSandbox isn't giving a solution is that no optimum (at least, with finite values) exists for the problem as written if the objective is power minimization - mathematically, the problem is not well-posed.
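To make the ill-posedness concrete, here's a rough back-of-the-envelope argument (my sketch, assuming steady level flight at a fixed lift coefficient $C_L$, a roughly constant $C_D / C_L$, and a weight $W$ that doesn't depend on wing area $S$):

$$
L = W = \tfrac{1}{2} \rho V^2 S C_L
\quad \Rightarrow \quad
V^2 S = \frac{2W}{\rho C_L} = \text{const.}
$$

$$
P = D \, V = \tfrac{1}{2} \rho V^3 S C_D
= \frac{C_D}{C_L} \, W V
\;\to\; 0
\quad \text{as } V \to 0,\ S \to \infty.
$$

The required power has an infimum of zero, but it is only approached in the limit of infinite wing area and vanishing airspeed - exactly the direction the iterates above are marching.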
Two possible strategies (among many possible ones) to make the problem well-posed are:

1. Make weight a function of wing area, so that a bigger wing is no longer free, e.g.:

   ```python
   weight = 100 * wing_area ** 1.5
   ```
2. Put an explicit upper bound on the wing area variable:

   ```python
   wing_area = opti.variable(init_guess=1, upper_bound=10)
   ```

(A combined sketch of both strategies appears at the end of this reply.)

Unrelated, but you probably also want to add sensible bounds on alpha; see the same sketch below. NeuralFoil will handle any value of alpha just fine (e.g., if you ask for airfoil aerodynamics at …), so nothing stops the optimizer from driving alpha to nonsensical values unless you constrain it.

Separately, you probably also want to add a …
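As referenced above, here's a combined sketch of those fixes. The constants are illustrative placeholders (the 100 and 1.5 are not a validated structural law, and the alpha limits are hypothetical), so substitute values appropriate to your design:

```python
import aerosandbox as asb

opti = asb.Opti()

# Strategy 2: bound the design variable so it cannot run away to infinity.
wing_area = opti.variable(init_guess=1, upper_bound=10)  # m^2

# Strategy 1: couple weight to wing size, so that extra area is no longer free.
# (The 100 and 1.5 are illustrative placeholders, not a validated weight model.)
weight = 100 * wing_area ** 1.5  # N

# Bound alpha to a physically sensible range, since NeuralFoil will happily
# return numbers for any alpha you feed it. Limits here are hypothetical.
alpha = opti.variable(init_guess=3, lower_bound=-5, upper_bound=12)  # degrees

# ...then build the rest of the power-minimization problem as before.
```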