
use rug::Float instead of hard-coding f64 for arbitrary precision #109

Open
alessandromazza98 opened this issue May 31, 2024 · 1 comment


@alessandromazza98

Hi,

Have you ever thought about using the rug::Float type instead of hard-coding f32 or f64, so that users of your crate can have arbitrary precision over the data fed into the model?

I think it would be beneficial not only for me, but for all use cases that require better precision than an f64 can provide (an f64 has a 53-bit mantissa).
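
For a concrete illustration, here is a minimal sketch using the rug crate (`Float::with_val` and `Float::parse` are real rug APIs; the 200-bit precision is an arbitrary choice):

```rust
use rug::Float;

fn main() {
    // f64 has a 53-bit mantissa, so repeatedly adding 0.1 accumulates error:
    let mut x64 = 0.0_f64;
    for _ in 0..1_000 {
        x64 += 0.1;
    }
    println!("f64:        {:.25}", x64); // drifts away from 100

    // rug::Float lets the caller choose the precision (here 200 bits):
    let tenth = Float::with_val(200, Float::parse("0.1").unwrap());
    let mut xf = Float::with_val(200, 0);
    for _ in 0..1_000 {
        xf += &tenth;
    }
    println!("rug::Float: {:.25}", xf); // error is orders of magnitude smaller
}
```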

I would be happy to work on integrating Float in place of f32/f64 if you are open to discussing it and potentially merging it later.
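
As a rough sketch of one possible shape for the integration: rug::Float is not Copy and does not implement num_traits::Float, so the crate would likely need its own trait bound along these lines (`ModelFloat` is a hypothetical name, not anything in this crate):

```rust
use std::ops::{Add, Mul};

use rug::Float;

// Hypothetical trait abstracting over the float type used internally.
trait ModelFloat: Clone + Add<Output = Self> + Mul<Output = Self> {
    fn from_f64(v: f64) -> Self;
}

impl ModelFloat for f64 {
    fn from_f64(v: f64) -> Self {
        v
    }
}

impl ModelFloat for Float {
    fn from_f64(v: f64) -> Self {
        Float::with_val(200, v) // precision would need to be configurable
    }
}

// A routine written against the trait instead of a hard-coded f64.
fn dot<T: ModelFloat>(a: &[T], b: &[T]) -> T {
    a.iter()
        .zip(b)
        .fold(T::from_f64(0.0), |acc, (x, y)| acc + x.clone() * y.clone())
}
```

The extra clones are roughly what a non-Copy type costs relative to the current code, which is worth weighing against the precision gain.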

What do you think?

Thanks,
Alessandro

@jinlow
Owner

jinlow commented May 31, 2024

One of the main benefits of using f32 where it is currently used is speed and memory usage. I am not familiar with the rug crate; do you know whether its Float type is slower or uses more memory than the built-in f32 and f64 types?
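
For what it's worth, the inline size difference is easy to check (a minimal sketch; exact numbers depend on the target, and rug::Float additionally heap-allocates its mantissa, so its real footprint is larger still):

```rust
use std::mem::size_of;

use rug::Float;

fn main() {
    println!("f32:        {} bytes", size_of::<f32>()); // 4
    println!("f64:        {} bytes", size_of::<f64>()); // 8
    // Inline size only; the mantissa itself lives on the heap.
    println!("rug::Float: {} bytes", size_of::<Float>()); // typically 32 on 64-bit targets
}
```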
Another thing to consider is the interoperability of Float with the Python wrapper. I would be curious whether Float can be passed back and forth from Python, or whether it has support on the PyO3 side.
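
As far as I know, rug::Float does not implement PyO3's conversion traits out of the box, so values would have to be converted manually at the boundary, e.g. lossily through f64 or via decimal strings. A sketch under those assumptions (`predict_to_strings` is a made-up function, and accepting f64 inputs from Python would itself cap the usable precision):

```rust
use pyo3::prelude::*;
use rug::Float;

// Hypothetical boundary function: compute at 200 bits internally, then
// return decimal strings that Python can reparse (e.g. with decimal.Decimal).
#[pyfunction]
fn predict_to_strings(values: Vec<f64>) -> Vec<String> {
    values
        .iter()
        .map(|&v| Float::with_val(200, v).to_string())
        .collect()
}
```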
