Load piper into RAM/VRAM for persistence (remove model load time) #1

Closed
JohnnySn0w opened this issue Feb 15, 2024 · 0 comments · Fixed by #13
JohnnySn0w (Owner) commented on Feb 15, 2024

Depends on #2
Piper currently has to be loaded on every invocation. It would be preferable to load it into RAM once and submit requests to it over HTTP or another messaging protocol, both to make invocation easier and to save on load times.
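
A minimal sketch of that shape, assuming a Python binding: `load_piper_model` and `synthesize_wav` are hypothetical placeholders for whatever piper API the project ends up using, but the structure shows the point, the model is loaded once at startup and every HTTP request reuses it.

```python
# Sketch of a persistent piper service: the model is loaded once at startup
# and reused across requests, so the per-invocation load time disappears.
# `load_piper_model` and `synthesize_wav` are hypothetical placeholders for
# the actual piper binding.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_piper_model(model_path: str):
    """Hypothetical: load the piper voice model into RAM once."""
    raise NotImplementedError("wire up the real piper binding here")


def synthesize_wav(model, text: str) -> bytes:
    """Hypothetical: run inference with the already-loaded model."""
    raise NotImplementedError("wire up the real piper binding here")


MODEL = None  # populated once in main(), then shared by every request


class TTSHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"text": "hello"} and return WAV audio.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        audio = synthesize_wav(MODEL, payload.get("text", ""))

        self.send_response(200)
        self.send_header("Content-Type", "audio/wav")
        self.send_header("Content-Length", str(len(audio)))
        self.end_headers()
        self.wfile.write(audio)


def main():
    global MODEL
    MODEL = load_piper_model("en_US-voice.onnx")  # path is illustrative
    HTTPServer(("127.0.0.1", 5000), TTSHandler).serve_forever()


if __name__ == "__main__":
    main()
```

With the service running, an invocation could be as simple as `curl -X POST http://127.0.0.1:5000 -H 'Content-Type: application/json' -d '{"text": "hello"}' --output hello.wav` (host, port, and payload shape are illustrative).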

@JohnnySn0w JohnnySn0w added the enhancement New feature or request label Feb 26, 2024
@JohnnySn0w JohnnySn0w self-assigned this Mar 3, 2024
@JohnnySn0w JohnnySn0w changed the title Load piper into VRAM for persistence (remove model load time) Load piper into RAM/VRAM for persistence (remove model load time) Mar 3, 2024