Modeling does not produce reasonable values #1
Never mind, I didn't get the delta encoding right.
Alright, two hours later I believe I got the encoding right, and the time series frames in model.py have reasonable values (see the updated input). But now the filter functions in make_frame discard pretty much all of the rows, because the 0.9 insulin_quantile is close to zero (after all, most 5-minute buckets contain no insulin deviations). The results at least look very different per run now (see the attached plots for 7 days and for 1 day). Some factor is probably still quite a bit off. Can you spot it?
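To illustrate the quantile problem (a minimal sketch; the `insulin` column name and the shape of the filter are my assumptions, not tune's actual code):

```python
# Sketch: why a 0.9 quantile threshold on mostly-empty 5-minute buckets
# discards nearly all rows. Column name and filter are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2016                                   # 7 days of 5-minute buckets
insulin = np.zeros(n)
bolus_idx = rng.choice(n, size=30, replace=False)
insulin[bolus_idx] = rng.uniform(0.5, 6.0, size=30)
frame = pd.DataFrame({"insulin": insulin})

threshold = frame["insulin"].quantile(0.9)
kept = frame[frame["insulin"] > threshold]
print(threshold)                           # 0.0: over 90% of buckets are empty
print(f"{len(kept)} of {n} rows survive")  # only the 30 bolus buckets remain
```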
Hi @trixing. Did you do some more work on @mariusae's scripts and models?
According to these references, https://seemycgm.com/2017/10/21/exponential-insulin-curves-fiasp/ and https://github.com/LoopKit/Loop/blob/26b11d72399b99677423b176d5cf1826dedb8def/LoopCore/Insulin/ExponentialInsulinModelPreset.swift#L18-L51, a better version for Fiasp would be the exponential model with Loop's Fiasp parameters, sketched below.
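A minimal sketch of that curve, assuming the Fiasp preset from the linked Loop source (action duration 360 minutes, peak activity at 55 minutes); this is my reconstruction from the references, not code from this repository:

```python
# Exponential insulin activity curve from the linked references, using
# Loop's Fiasp preset (actionDuration = 360 min, peakActivity = 55 min).
import numpy as np

def insulin_activity(t, td=360.0, tp=55.0):
    """Relative insulin activity t minutes after a bolus."""
    tau = tp * (1 - tp / td) / (1 - 2 * tp / td)
    a = 2 * tau / td
    s = 1 / (1 - a + (1 + a) * np.exp(-td / tau))
    return (s / tau ** 2) * t * (1 - t / td) * np.exp(-t / tau)

t = np.arange(0.0, 361.0, 5.0)
print(t[np.argmax(insulin_activity(t))])  # peak at 55 minutes
```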
Updates: I can also confirm that the basal schedule I get is a factor of ~10 too high. Will try to do some more investigation later. @trixing: I still haven't found the root cause of why the basal schedule seems off. I also see almost no correction to ISF or CR.
I will investigate further.
@trixing I reimplemented the basals export and made some additions to your script (startdate/enddate; fixed max instead of min).
I ended up giving up on trying to fix it back then and instead implemented the gist of it as a colab model using scipy: https://colab.research.google.com/drive/1bbjvabg9_y98ULzO1ZVFFy2pc4gkLgJO?usp=sharing Feel free to play around with it. It's unfortunately pretty undocumented, but I'm sure you can follow along. To convert Nightscout data into the format that "tune" (and the colab model) expects, I wrote a fairly elaborate script back then; I'm not sure it still works, but feel free to give it a try. I pushed it here: https://github.com/trixing/tune/blob/master/nightscout_to_json.py
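The notebook itself isn't reproduced here, but the gist, as I understand it, is a least-squares fit of ISF, CR, and basal against observed glucose deltas. A toy sketch of that shape (the forward model, names, and units are all illustrative assumptions, not the notebook's actual code):

```python
# Toy least-squares fit in the spirit of the colab model: recover ISF, CR,
# and a flat basal rate from per-bucket activity series. All names, units,
# and the forward model here are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def predicted_delta(params, ins_act, carb_act):
    isf, cr, basal = params
    # 5-minute glucose change: carbs raise it by ISF/CR per gram; insulin
    # (bolus activity plus basal per bucket) lowers it by ISF per unit.
    return carb_act * isf / cr - (ins_act + basal / 12.0) * isf

n = 2016
rng = np.random.default_rng(1)
ins_act = rng.uniform(0.0, 0.1, n)   # bolus insulin acting per bucket (U)
carb_act = rng.uniform(0.0, 2.0, n)  # carbs absorbed per bucket (g)
true = (50.0, 10.0, 0.8)             # ISF mg/dl/U, CR g/U, basal U/h
glucose_delta = predicted_delta(true, ins_act, carb_act)

fit = least_squares(
    lambda p: predicted_delta(p, ins_act, carb_act) - glucose_delta,
    x0=(40.0, 8.0, 1.0), bounds=([10, 2, 0.1], [400, 50, 5]))
print(fit.x)  # ~ (50, 10, 0.8)
```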
I also pushed the converter and the colab to my fork of tune; that might be easier for collaboration than keeping the converter in a separate repository.
Hi,
I wrote a quick nightscout-to-tune input converter (https://github.com/trixing/tune). It produces JSON files like the attached ones, which seem to conform to the specified format (for 1, 3, and 7 days respectively in this case).
Unfortunately the produced basal schedule is rather meaningless: it is a factor of ~10 too high, and the result is identical whether I run on 3 or 7 days.
3-day data:

```json
{"version": 1, "timezone": "Europe/Berlin", "insulin_sensitivity_schedule": {"index": [0, 390], "values": [170.0000000000016, 169.99999999999957]}, "carb_ratio_schedule": {"index": [0, 360, 660, 1050], "values": [9.000000000000052, 9.000000000000027, 9.000000000000002, 9.000000000000005]}, "basal_rate_schedule": {"index": [0, 60, 120, 180, 240, 300, 360, 420, 480, 540, 600, 660, 720, 780, 840, 900, 960, 1020, 1080, 1140, 1200, 1260, 1320, 1380], "values": [6.060164554997598, 5.789143251467735, 6.669406893763722, 7.100211844444492, 7.206123554462863, 7.198766166916485, 7.2001180743763005, 7.200090756536767, 7.1998197971922835, 7.200449445844885, 7.198854832952293, 7.202448204774257, 7.1960631269896504, 7.202852101333228, 7.2101639269212, 7.1397788680270615, 7.410869150932365, 6.530692446003393, 6.099377983276996, 5.995011592147588, 5.998777410458621, 6.0038361020430315, 5.997054760167179, 5.98992515397004]}, "training_loss": NaN}
```

7-day data:

```json
{"version": 1, "timezone": "Europe/Berlin", "insulin_sensitivity_schedule": {"index": [0, 390], "values": [170.0000000000016, 169.99999999999957]}, "carb_ratio_schedule": {"index": [0, 360, 660, 1050], "values": [9.000000000000052, 9.000000000000027, 9.000000000000002, 9.000000000000005]}, "basal_rate_schedule": {"index": [0, 60, 120, 180, 240, 300, 360, 420, 480, 540, 600, 660, 720, 780, 840, 900, 960, 1020, 1080, 1140, 1200, 1260, 1320, 1380], "values": [6.060164554997598, 5.789143251467735, 6.669406893763722, 7.100211844444492, 7.206123554462863, 7.198766166916485, 7.2001180743763005, 7.200090756536767, 7.1998197971922835, 7.200449445844885, 7.198854832952293, 7.202448204774257, 7.1960631269896504, 7.202852101333228, 7.2101639269212, 7.1397788680270615, 7.410869150932365, 6.530692446003393, 6.099377983276996, 5.995011592147588, 5.998777410458621, 6.0038361020430315, 5.997054760167179, 5.98992515397004]}, "training_loss": NaN}
```
I'm uncertain though whether I'm inputting everything correctly. My understanding is that "tune" expects, for example, basal amounts as deltas from the scheduled basal rate? The delta modeling is easy to get wrong, but it looks correct to me; my reading of the encoding is sketched below.
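A hypothetical sketch of that delta encoding (the schedule lookup and units reflect my reading of the expected input, not tune's documented format):

```python
# Hypothetical sketch: encode a temp basal as a delta from the scheduled
# rate. Schedule entries are (minute-of-day, U/h); all of this reflects my
# reading of the expected input, not a documented format.
def scheduled_rate(schedule, minute_of_day):
    """Look up the scheduled basal rate active at a given minute of day."""
    rate = schedule[0][1]
    for start, r in schedule:
        if start <= minute_of_day:
            rate = r
    return rate

schedule = [(0, 0.8), (360, 1.0), (1320, 0.9)]
temp_start, temp_rate = 400, 1.5          # temp basal of 1.5 U/h at 06:40
delta = temp_rate - scheduled_rate(schedule, temp_start)
print(delta)                              # 0.5 U/h above schedule
```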
I'd be interested in getting the model to work, just as a reference point for autotune and manual tuning.
There is also code in model.py which does a cumsum ("undelta") on the basal durations. I'm not sure how that is useful, but uncommenting the line doesn't change the result, which is a bit surprising. In general the algorithm seems a bit too stable?
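For what it's worth, "undelta" as I read it is just a cumulative sum inverting a delta encoding; a minimal illustration (not model.py's actual code):

```python
# Minimal illustration of "undelta": cumsum inverts a delta encoding.
# If the input were already absolute, cumsum would inflate it badly,
# which is why it is surprising that toggling the line changes nothing.
import numpy as np

absolute = np.array([0.8, 0.8, 1.0, 1.0, 0.9])
deltas = np.diff(absolute, prepend=0.0)   # [0.8, 0.0, 0.2, 0.0, -0.1]
print(np.cumsum(deltas))                  # [0.8, 0.8, 1.0, 1.0, 0.9]
```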
Archive.zip