Hi,

Have you ever thought about using the rug::Float type instead of hard-coding f32 or f64, so that users of your crate can have arbitrary precision for the data fed into the model?

I think it would be beneficial not only for me, but for all use cases that need more precision than an f64 can provide - an f64 only has 53 bits of mantissa precision.
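For illustration, here is a minimal sketch of what that looks like with rug (the 256-bit precision and the 1/3 example are just placeholders I picked, not anything from this crate):

```rust
use rug::Float;

fn main() {
    // f64 carries a 53-bit mantissa; rug::Float lets the caller pick the precision.
    let x64 = 1.0_f64 / 3.0;
    let x256 = Float::with_val(256, 1) / Float::with_val(256, 3); // 256-bit mantissa

    println!("f64 (53 bits):         {}", x64);
    println!("rug::Float (256 bits): {}", x256);
    assert_eq!(x256.prec(), 256);
}
```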
I would be happy to work on integrating Float in place of f32 / f64 if you are open to discussing it and merging it later.
What do you think?
Thanks,
Alessandro
One of the main benefits of using f32 where it is currently used is speed and memory usage. I am not familiar with the rug crate - do you know whether its Float type is slower or uses more memory than the built-in f32 and f64 types?

Another thing to consider is the interoperability of Float with the Python wrapper. I would be curious whether a Float can be passed back and forth from Python, or whether it has support on the PyO3 side.
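As far as I know rug::Float has no built-in PyO3 conversions, so one possible bridge - purely a sketch, where sum_as_string, the precise module name, and the string-based passing are my own hypothetical choices - would be to move values across the boundary as decimal strings so Python's f64 never truncates them:

```rust
use pyo3::prelude::*;
use rug::Float;

/// Hypothetical bridge: Python passes each value as a string, we parse it
/// into a rug::Float at the requested precision, and return the sum as a string.
#[pyfunction]
fn sum_as_string(values: Vec<String>, prec: u32) -> PyResult<String> {
    let mut acc = Float::new(prec);
    for v in values {
        let parsed = Float::parse(&v)
            .map_err(|e| pyo3::exceptions::PyValueError::new_err(e.to_string()))?;
        acc += Float::with_val(prec, parsed);
    }
    Ok(acc.to_string())
}

// pyo3 0.20-style module init; newer pyo3 versions use &Bound<'_, PyModule> instead.
#[pymodule]
fn precise(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
    Ok(())
}
```

The string round-trip is slow, but it keeps the full precision at the boundary; if the Python side only ever needs f64, converting with Float::to_f64 on the way out would also work.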